Frontier AI Research Lab

Three frontiers.
One trust layer.

Geodesia is a European research lab. Our work bridges geometric deep learning, mechanistic interpretability, and the safety of small open-source LLMs. Every output is a peer-reviewed paper, a patent, and a production capability. G-1, our first commercial product, is live. The next, GLAD-Manifold, a physical, geometric reinvention of the Transformer and the foundation architecture for future AI and reasoning models, is in closed alpha.

Three research pillars
Product roadmap →
Citations · 10,000+ combined
IP · Patent pending · EU
Affiliations · Bari · Oxford · Stanford
Products · G-1 live · GLAD-Manifold next

Capability has been democratized.
Trust has not.

A small open-source LLM can be downloaded by anyone, fine-tuned by anyone, and deployed by anyone. None of that guarantees that its outputs are safe, grounded, explainable, or compliant. The trust layer is a first-class research artifact: it is what we work on.

Safety & hallucination control
of local open-source models.

Open and fine-tuned LLMs lag frontier closed models on safety by every measurable axis: jailbreak resilience, prompt-injection containment, factual grounding, refusal calibration. We close the gap at the runtime layer, without modifying weights, by reading the geometry of the model's own internal representations.

A
🛡️

Pre-generation Safety Gate

16 latent-space centroids trained on adversarial corpora detect jailbreaks, prompt injection, and policy violations before the model is even called. AUROC 0.82, +79% over base-model refusals. Latency <5 ms.
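
To illustrate how a centroid-based pre-generation gate of this kind can work, here is a minimal NumPy sketch. The function name, the cosine-similarity test, and the threshold value are assumptions for illustration; the production gate's actual scoring rule is not published here.

```python
import numpy as np

def safety_gate(prompt_embedding: np.ndarray,
                centroids: np.ndarray,
                threshold: float = 0.75) -> bool:
    """Return True when the prompt should be blocked before generation.

    Flags the prompt if its embedding falls within `threshold` cosine
    similarity of any adversarial centroid (threshold is illustrative).
    """
    # Normalise so dot products become cosine similarities.
    e = prompt_embedding / np.linalg.norm(prompt_embedding)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    similarities = c @ e  # one score per centroid, shape (16,)
    return bool(similarities.max() >= threshold)
```

Because the check is a single matrix-vector product against 16 vectors, a latency budget of a few milliseconds is plausible on commodity hardware.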

B
🧠

NSP Coherence Engine

Neural Symbolic Potentials. Reads the trajectory of the response in latent space and computes four geometric features: max coherence, smoothness, jerk, and context gap. These combine into a single hallucination score. AUROC 0.96. State-of-the-art on-premise.
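
A hedged sketch of what trajectory features of this kind can look like. The definitions below (finite-difference velocity, acceleration, and jerk; a mean-embedding context gap) are illustrative stand-ins, not the published NSP formulation:

```python
import numpy as np

def nsp_features(traj: np.ndarray, context: np.ndarray) -> dict:
    """Geometric features of a response trajectory in latent space.

    traj:    (T, d) hidden states of the generated response, T >= 4
    context: (C, d) hidden states of the prompt
    The exact feature definitions here are illustrative assumptions.
    """
    v = np.diff(traj, axis=0)      # velocity between consecutive states
    a = np.diff(v, axis=0)         # acceleration
    j = np.diff(a, axis=0)         # jerk (third difference)
    unit = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)
    return {
        # Highest directional agreement between consecutive steps.
        "max_coherence": float((unit[:-1] * unit[1:]).sum(axis=1).max()),
        # Smooth trajectories have small accelerations and jerks.
        "smoothness": float(np.linalg.norm(a, axis=1).mean()),
        "jerk": float(np.linalg.norm(j, axis=1).mean()),
        # How far the response drifts from the prompt's mean embedding.
        "context_gap": float(np.linalg.norm(traj.mean(0) - context.mean(0))),
    }
```

A perfectly straight trajectory scores maximal coherence with zero acceleration and jerk; erratic, direction-flipping trajectories score the opposite. How the four features are weighted into one hallucination score is a learned detail not reproduced here.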

C
🔗

Agentic cascade containment

The cascade problem, one hallucination corrupting an entire multi-agent pipeline, is among the hardest open problems in agentic AI. Our research treats every agent hop as a checkpoint and applies per-hop scoring with cryptographic credit assignment.
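
The checkpoint-per-hop idea can be sketched as follows. The scoring function, the threshold, and the hash-chained ledger (standing in for cryptographic credit assignment) are illustrative assumptions, not the lab's actual protocol:

```python
import hashlib
import json

def run_pipeline(agents, task, score_fn, threshold=0.5):
    """Run agents in sequence, scoring every hop.

    Each hop appends a hash-chained record (a stand-in for cryptographic
    credit assignment), so a downstream failure can later be attributed
    to the exact hop that introduced it. Containment: the run halts at
    the first hop whose score falls below `threshold`.
    """
    state, prev_hash, ledger = task, "genesis", []
    for name, agent in agents:
        state = agent(state)
        score = score_fn(state)
        record = {"agent": name, "score": score, "prev": prev_hash}
        prev_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        ledger.append({**record, "hash": prev_hash})
        if score < threshold:          # contain the cascade at this hop
            return None, ledger
    return state, ledger
```

Because each record commits to its predecessor's hash, tampering with any hop's score invalidates every later entry, which is what makes post-hoc credit assignment auditable.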

Mechanistic explainability:
MuPAX & EVIDENCE.

A regulator does not want a heatmap. They want a mechanistic, auditable explanation of why the system did what it did. We have authored two peer-reviewed methods that meet that bar, both with mathematically guaranteed convergence, both shipping in G-1.

📐

MuPAX

Multidimensional Problem-Agnostic eXplainable AI

An N-dimensional, model- and loss-agnostic XAI methodology with mathematically guaranteed convergence. Validated on 1D audio, 2D image, 3D volumetric medical, and anatomical landmark detection. Unlike LIME, SHAP, or GradCAM, MuPAX preserves, and can even enhance, model accuracy under masking, because it captures only the truly important patterns.

arXiv 2507.13090 · Dentamaro, Pirlo · 2025
Read on arXiv →
🧬

EVIDENCE

EVolutionary Independent DEtermiNistiC Explanation

A deterministic, model-independent method for extracting only the signals a black-box model recognizes as important. Mathematically proven convergence. Designed for time-varying signals (audio) and extensible to images, video, and 3D. Validated on COVID-19 audio diagnostics, Parkinson's voice recordings, and music classification, outperforming LIME, SHAP, and GradCAM on almost all metrics.

arXiv 2501.16357 · EAAI 2025 · Dentamaro, Pirlo
Read on arXiv →
PRINCIPLE · 01
Convergence, not confidence

We only ship XAI methods with formal convergence guarantees. Heuristic post-hoc explanations are not court-quality evidence and we do not pretend otherwise.

PRINCIPLE · 02
Faithful to the model

Explanations are derived from the model's own internal states: gradients, attention, hidden activations. No surrogate models. No rationalization layers stacked on top.

PRINCIPLE · 03
Useful at every speed

Three speed tiers in production: Occlusion (3–8 s, fast scan), Integrated Gradients (60–120 s, axiomatic), MuPAX (30–180 s, court-quality).
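
Of the three tiers, occlusion is the simplest to show. A minimal sketch of single-feature occlusion attribution, assuming a 1-D input, per-feature masking granularity, and a zero baseline (all illustrative choices; production occlusion typically masks patches or windows):

```python
import numpy as np

def occlusion_attribution(predict, x, baseline=0.0):
    """Fast-scan occlusion over a 1-D input.

    The importance of feature i is the drop in the model's score when
    that feature is replaced by `baseline`.
    """
    base_score = predict(x)
    attributions = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        masked = x.copy()
        masked[i] = baseline           # occlude one feature at a time
        attributions[i] = base_score - predict(masked)
    return attributions
```

The cost is one forward pass per feature, which is why occlusion sits in the seconds-scale "fast scan" tier rather than the axiomatic or court-quality ones.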

A physical, geometric
world model.

GLAD-Manifold (Geometric Learning of Action Dynamics on Riemannian Manifold) is our next commercial product and the foundation architecture for the AI and reasoning systems that will follow the Transformer. It reinvents attention as a physical interaction field over a curved representation manifold: not a fixed similarity computation, but a context-adaptive geometry that the network learns end-to-end. The same machinery that gives it a strictly richer hypothesis class also lets it act as a world model, internalising the dynamics of action, state, and consequence, which is what the next wave of reasoning systems requires.

Geometry as a learned object.

Where conventional Transformers expose a single, rigid mechanism for token interaction, GLAD treats the very shape of that interaction as part of what the network learns. The result is a strictly richer hypothesis class with markedly stronger compositional and generalisation behaviour, while preserving the operational profile that production deployments depend on.

Three consequences matter most. The architecture's interaction surface adapts to the task, rather than the task adapting to the architecture. Pre-trained checkpoints from the conventional Transformer family transfer into GLAD without retraining. And, most usefully for our applications, the geometric structure exposes natural intrinsic loci where trust, safety, grounding, and alignment can live as properties of the representation, not as filters bolted on top.

A curved manifold with adaptive interaction structure
◌
Adaptive geometry
interaction shape is learned, not fixed
⟳
Context-dependent
structure varies per inference
⌖
Riemannian foundation
curvature as a first-class object
⊇
Strictly richer
super-class of standard attention
↻
Native checkpoint transfer
no retraining required
◈
Intrinsic safety surface
in representation space, not on output
PROPERTY · ADAPTIVE
A learned interaction surface

Rather than fixing the way tokens influence each other up front, GLAD lets that influence be shaped by the data and the context, at every layer and every step. This gives it the capacity to express structure the conventional architecture cannot reach.

PROPERTY · COMPATIBLE
Compatible with the Transformer ecosystem

The conventional Transformer sits inside the GLAD family as a particular limit. Existing pre-trained checkpoints transfer in directly, turning every model already in production into a candidate base for our trust layer.

PROPERTY · SAFETY-NATIVE
Alignment as geometry

The architecture's geometric structure exposes natural intrinsic positions for trust, safety, and grounding mechanisms: earlier signal, lower latency, mechanistically grounded inside the representation rather than stacked on the output.

From paper to product.
Three more in the lab.

G-1 is the first commercial output of the lab. Each subsequent product takes one of our research thrusts and ships it as enterprise-grade infrastructure.

2026 Q2
G-1 / Generally Available

The Trust Layer

Non-invasive runtime that wraps any open-source LLM via vLLM. Frontier-grade safety, hallucination control, full EU AI Act compliance. Live with enterprise design partners across financial services, insurance, and healthcare.

NSP Coherence · Constitutional AI · MuPAX · EVIDENCE · IG · 13 frameworks
2026 H2
GLAD-Manifold / Closed Alpha

Physical World Model: the next product

A reinvention of the Transformer: physical, geometric, and built on a learned Riemannian manifold. Where the conventional Transformer fixes a single mechanism of token interaction, GLAD-Manifold lets the very geometry of that interaction be learned end-to-end, yielding a strictly richer hypothesis class with markedly stronger compositional and reasoning behaviour, plus the ability to act as a world model capturing the dynamics of action, state, and consequence. It is the architectural foundation for the AI and reasoning systems that come after the Transformer. Closed alpha with research partners.

Physical world model · Geometric attention · Reasoning-native · Backwards-compatible · Closed alpha · 2026 H2
2027+
G-3 / Long-horizon research

Sovereign Reasoning Model on GLAD-Manifold

The natural commercial endpoint of the architecture: a frontier-class reasoning model trained natively on GLAD-Manifold, with safety, grounding and alignment expressed inside the representation space from the very first token. European-built. Sovereign-deployable. Aligned by construction. We will ship when the science is right.

GLAD-native · Reasoning · Sovereign · Long-horizon

Where the science lives.

arXiv · 2507.13090 · 2025

MUPAX: Multidimensional Problem-Agnostic eXplainable AI

Vincenzo Dentamaro, Giuseppe Pirlo

N-dimensional, problem- and loss-agnostic XAI methodology with mathematically guaranteed convergence. Outperforms LIME, SHAP, GradCAM across audio, image, volumetric medical, and anatomical landmark tasks.

Read paper →
arXiv · 2501.16357 · EAAI 2025

EVolutionary Independent DEtermiNistiC Explanation

Vincenzo Dentamaro, Giuseppe Pirlo

A deterministic, model-independent XAI method with mathematically proven convergence. Validated across COVID-19 audio diagnostics, Parkinson's voice recordings, and music classification.

Read paper →
Want to read more?

Full publication list, talks, and reading recommendations from the team. We collaborate with researchers globally; please reach out.

Contact research →

Talk to the lab.

For VCs, research collaborators, and customers who want to evaluate the underlying science before adopting the platform.