Geodesia is a European research lab. Our work bridges geometric deep learning, mechanistic interpretability, and the safety of small open-source LLMs. Every output is a peer-reviewed paper, a patent, and a production capability. G-1, our first commercial product, is live. The next, GLAD-Manifold (a physical, geometric reinvention of the Transformer and the foundation architecture for future AI and reasoning models), is in closed alpha.
A small open-source LLM can be downloaded by anyone, fine-tuned by anyone, and deployed by anyone. None of that solves whether its outputs are safe, grounded, explainable, or compliant. The trust layer is a first-class research artifact: it is what we work on.
Open and fine-tuned LLMs lag frontier closed models on safety by every measurable axis: jailbreak resilience, prompt-injection containment, factual grounding, refusal calibration. We close the gap at the runtime layer, without modifying weights, by reading the geometry of the model's own internal representations.
Sixteen latent-space centroids trained on adversarial corpora detect jailbreaks, prompt injection, and policy violations before the model is even called. AUROC 0.82, +79% over base-model refusals. Latency under 5 ms.
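The centroid pre-filter can be sketched as a nearest-centroid check in embedding space. This is a toy illustration only: the centroid values, embedding dimension, and threshold below are random placeholders, not G-1's actual trained parameters.

```python
import numpy as np

# Toy sketch of a centroid-based pre-filter. In production the centroids
# would be fit on adversarial corpora; here they are random stand-ins.
rng = np.random.default_rng(0)
DIM = 64  # illustrative embedding dimension

centroids = rng.normal(size=(16, DIM))
THRESHOLD = 9.0  # illustrative distance below which a prompt is flagged

def flag_prompt(embedding: np.ndarray) -> bool:
    """Flag the prompt if its embedding lands near any adversarial centroid."""
    dists = np.linalg.norm(centroids - embedding, axis=1)
    return bool(dists.min() < THRESHOLD)
```

Because the check is a single distance computation against 16 fixed vectors, it runs in microseconds, which is consistent with a sub-5 ms budget before the model is called.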
Neural Symbolic Potentials. Reads the trajectory of the response in latent space and computes four geometric features: max coherence, smoothness, jerk, and context gap. Combined into a single hallucination score. AUROC 0.96. State-of-the-art on-premise.
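A minimal sketch of trajectory-geometry features follows. The feature names match the text, but the exact definitions and the combination weights here are illustrative assumptions, not the published method.

```python
import numpy as np

def trajectory_features(hidden: np.ndarray) -> dict:
    """Geometric features of a response trajectory; hidden is (T, D),
    one hidden-state vector per generated token."""
    steps = np.diff(hidden, axis=0)   # first differences (velocity)
    accel = np.diff(steps, axis=0)    # second differences (acceleration)
    jerk = np.diff(accel, axis=0)     # third differences (jerk)
    # Max coherence: highest cosine similarity between consecutive steps.
    cos = np.sum(steps[:-1] * steps[1:], axis=1) / (
        np.linalg.norm(steps[:-1], axis=1)
        * np.linalg.norm(steps[1:], axis=1) + 1e-9)
    return {
        "max_coherence": float(cos.max()),
        "smoothness": float(-np.linalg.norm(accel, axis=1).mean()),
        "jerk": float(np.linalg.norm(jerk, axis=1).mean()),
        "context_gap": float(np.linalg.norm(hidden[-1] - hidden[0])),
    }

def hallucination_score(features: dict, weights: dict) -> float:
    """Linear combination into one score; real weights would be learned."""
    return sum(weights[k] * v for k, v in features.items())
```

The design point is that all four features are cheap functions of hidden states the model already produces, so scoring adds no extra forward passes.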
The cascade problem (one hallucination corrupting an entire multi-agent pipeline) is among the hardest open problems in agentic AI. Our research treats every agent hop as a checkpoint and applies per-hop scoring with cryptographic credit assignment.
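One way to make per-hop checkpoints tamper-evident is a hash chain over hop records. The sketch below is a generic construction under assumed field names and a placeholder score; it illustrates the credit-assignment idea, not our production protocol.

```python
import hashlib
import json

def record_hop(chain: list, agent: str, output: str, score: float) -> dict:
    """Append a tamper-evident record linking this hop to the previous one."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    payload = {"agent": agent, "output": output, "score": score, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    chain.append({**payload, "hash": digest})
    return chain[-1]

def verify_chain(chain: list) -> bool:
    """Recompute every hash; a tampered hop breaks the chain from that point on."""
    prev = "genesis"
    for entry in chain:
        payload = {k: entry[k] for k in ("agent", "output", "score", "prev")}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = digest
    return True
```

Because each record commits to its predecessor's hash, an audit can pinpoint the first hop whose record no longer verifies, which is exactly the checkpoint where blame for a cascade attaches.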
A regulator does not want a heatmap. They want a mechanistic, auditable explanation of why the system did what it did. We have authored two peer-reviewed methods that meet that bar, both with mathematically guaranteed convergence, both shipping in G-1.
An N-dimensional, model- and loss-agnostic XAI methodology with mathematically guaranteed convergence. Validated on 1D audio, 2D image, 3D volumetric medical, and anatomical landmark detection. Unlike LIME, SHAP, and GradCAM, MuPAX preserves, and can even enhance, model accuracy under masking, because it captures only the truly important patterns.
A deterministic, model-independent method for extracting only the signals a black-box model recognizes as important. Mathematically proven convergence. Designed for time-varying signals (audio) and extensible to images, video, and 3D. Validated on COVID-19 audio diagnostics, Parkinson's voice recordings, and music classification, outperforming LIME, SHAP, and GradCAM on almost all metrics.
We only ship XAI methods with formal convergence guarantees. Heuristic post-hoc explanations are not court-quality evidence and we do not pretend otherwise.
Explanations are derived from the model's own internal states: gradients, attention, hidden activations. No surrogate models. No rationalization layers stacked on top.
Three speed tiers in production: Occlusion (3–8 s, fast scan), Integrated Gradients (60–120 s, axiomatic), MuPAX (30–180 s, court-quality).
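The fast tier uses standard occlusion attribution: mask a region of the input and measure how much the model's output drops. A minimal version for a 1D input is sketched below; the model here is a stand-in callable, and the window size and masking baseline are assumptions.

```python
import numpy as np

def occlusion_map(model, x: np.ndarray, window: int = 2,
                  baseline: float = 0.0) -> np.ndarray:
    """Score each input position by the drop in model output when it is
    replaced with a baseline value. model maps an array to a scalar."""
    base = model(x)
    scores = np.zeros(len(x))
    for i in range(0, len(x), window):
        masked = x.copy()
        masked[i:i + window] = baseline  # occlude one window
        scores[i:i + window] = base - model(masked)
    return scores
```

For example, with `model = lambda v: float(v.sum())` the score of each position equals its own value, since occluding it removes exactly that contribution. This tier is cheap because it needs only one forward pass per window, with no gradients.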
GLAD-Manifold (Geometric Learning of Action Dynamics on Riemannian Manifold) is our next commercial product and the foundation architecture for the AI and reasoning systems that will follow the Transformer. It reinvents attention as a physical interaction field over a curved representation manifold: not a fixed similarity computation, but a context-adaptive geometry that the network learns end-to-end. The same machinery that gives it a strictly richer hypothesis class also lets it act as a world model, internalising the dynamics of action, state, and consequence, which is what the next wave of reasoning systems requires.
Where conventional Transformers expose a single, rigid mechanism for token interaction, GLAD treats the very shape of that interaction as part of what the network learns. The result is a strictly richer hypothesis class with markedly stronger compositional and generalisation behaviour, while preserving the operational profile that production deployments depend on.
Three consequences matter most. The architecture's interaction surface adapts to the task, rather than the task adapting to the architecture. Pre-trained checkpoints from the conventional Transformer family transfer into GLAD without retraining. And, most usefully for our applications, the geometric structure exposes natural intrinsic loci where trust, safety, grounding, and alignment can live as properties of the representation, not as filters bolted on top.
Rather than fixing the way tokens influence each other up front, GLAD lets that influence be shaped by the data and the context at every layer and every step, giving it the capacity to express structure the conventional architecture cannot reach.
The conventional Transformer sits inside the GLAD family as a particular limit. Existing pre-trained checkpoints transfer in directly, turning every model already in production into a candidate base for our trust layer.
The architecture's geometric structure exposes natural intrinsic positions for trust, safety, and grounding mechanisms: earlier signal, lower latency, mechanistically grounded inside the representation rather than stacked on the output.
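The special-case claim can be illustrated with a toy: attention whose similarity is computed under a learnable metric rather than a fixed dot product. This is only a one-line caricature of the idea that the shape of token interaction can itself be learned; GLAD-Manifold's actual construction (a learned Riemannian geometry) is far richer, and the function below is an assumption for illustration.

```python
import numpy as np

def metric_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray,
                     M: np.ndarray) -> np.ndarray:
    """Attention with scores q^T M k, where M is a learnable metric.
    Setting M to the identity recovers standard dot-product attention,
    mirroring the claim that the conventional Transformer is a limit case."""
    d = Q.shape[-1]
    scores = Q @ M @ K.T / np.sqrt(d)
    # Numerically stable softmax over each query's row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V
```

Because `M = I` reproduces the standard mechanism exactly, weights trained under ordinary attention remain valid initial points when the metric is then allowed to deform, which is the intuition behind checkpoint transfer.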
G-1 is the first commercial output of the lab. Each subsequent product takes one of our research thrusts and ships it as enterprise-grade infrastructure.
Non-invasive runtime that wraps any open-source LLM via vLLM. Frontier-grade safety, hallucination control, full EU AI Act compliance. Live with enterprise design partners across financial services, insurance, and healthcare.
A reinvention of the Transformer: physical, geometric, and built on a learned Riemannian manifold. Where the conventional Transformer fixes a single mechanism of token interaction, GLAD-Manifold lets the very geometry of that interaction be learned end-to-end, yielding a strictly richer hypothesis class with markedly stronger compositional and reasoning behaviour, and the ability to act as a world model capturing the dynamics of action, state, and consequence. The architectural foundation for the AI and reasoning systems that come after the Transformer. Closed alpha with research partners.
The natural commercial endpoint of the architecture: a frontier-class reasoning model trained natively on GLAD-Manifold, with safety, grounding and alignment expressed inside the representation space from the very first token. European-built. Sovereign-deployable. Aligned by construction. We will ship when the science is right.
Vincenzo Dentamaro, Giuseppe Pirlo
N-dimensional, problem- and loss-agnostic XAI methodology with mathematically guaranteed convergence. Outperforms LIME, SHAP, GradCAM across audio, image, volumetric medical, and anatomical landmark tasks.
Read paper →
Vincenzo Dentamaro, Giuseppe Pirlo
A deterministic, model-independent XAI method with mathematically proven convergence. Validated across COVID-19 audio diagnostics, Parkinson's voice recordings, and music classification.
Read paper →
Full publication list, talks, and reading recommendations from the team. We collaborate with researchers globally; please reach out.