azetta ai

Physics Grounded AI Research Lab.

Building state-of-the-art AI that is interpretable, steerable, and efficient.

// OUR MISSION

AI Should Be Safe, Transparent, and Sustainable.

"People outside the field are often surprised to learn that we do not understand how our own AI creations work. This lack of understanding is unprecedented in the history of technology."

— Dario Amodei, CEO of Anthropic · 2025

▶ The Black Box Problem

Today's AI is a black box running on brute force. Models grow larger every year, yet no one can explain why they hallucinate, discriminate, or fail: bias lives inside weights beyond our reach, and the frontier labs' attempts at reverse-engineering their models are extremely inefficient. Anthropic itself admits that fully understanding a model with its current approaches would take far more compute than was needed to train the model in the first place. We believe this is because today's frontier models are fundamentally opaque, and that a viable, scalable solution has to go back to the foundations. Our mission is to rebuild AI so that every decision is explainable, every behaviour is correctable, and models can improve and scale in far more efficient ways.

Prompt"the capital of Mexico is"
Output
· · · more layers above · · · · · · more layers below · · · BLACK BOX 96 layers · 8,192 neurons each INPUT HIDDEN LAYERS (96 total) OUTPUT
The core problem: Each neuron fires for dozens of unrelated concepts at once — geography, grammar, culture, sentiment all entangled. This superposition makes models opaque by construction. You cannot audit what you cannot see, and you cannot fix what you cannot audit.
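
To make superposition concrete, here is a toy sketch of our own (illustrative numbers, not drawn from any real model): squeeze five concept directions into three neurons, and every neuron ends up responding to several unrelated concepts at once.

```python
import jax.numpy as jnp
from jax import random

# Toy superposition: 5 concepts squeezed into 3 neurons. With more
# concepts than dimensions, concept directions are forced to overlap.
key = random.PRNGKey(0)
concepts = random.normal(key, (5, 3))                         # one direction per concept
concepts /= jnp.linalg.norm(concepts, axis=1, keepdims=True)  # unit-length directions

# Neuron 0's response to each of the five concepts: every entry is nonzero,
# so reading neuron 0 alone cannot tell you which concept is active.
print(concepts[:, 0])

# Conversely, an input expressing only concept 0 still drives all 3 neurons.
print(concepts[0])
```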

▶ Every Major AI Risk Traces Back to a Lack of Interpretability

Alignment

Harmful AI incidents hit 233 in 2024 — +56% YoY (Stanford HAI). We cannot verify what a model is optimising for — goals appear aligned but remain uninspectable.

Hallucinations

LLMs hallucinate on 75%+ of legal queries. We can detect errors in outputs — without interpretability, we cannot stop them at the source.

Bias & Fairness

LLMs preferred white-sounding names 85% of the time in hiring simulations. EEOC's first AI bias settlement: $325K. Bias lives inside weights — beyond our reach.

Privacy Leakage

Researchers extracted PII from ChatGPT for ~$200. 5%+ of outputs are verbatim training copies. No mechanism exists to audit what a model retained.

IP Exposure

Courts hold companies liable for AI outputs (Air Canada, 2024). Without interpretability, IP exposure is unquantifiable — we cannot trace which training data shaped a model's behaviour.

Harmful Use

EU AI Act mandates human oversight for high-risk AI — non-compliance: up to 6% of global revenue. Safety filters are surface patches on opaque systems.

▶ Why This Matters

01

The Ceiling on Progress

Model self-improvement requires interpretability. Without it, we cannot guide models reliably or safely.

02

Regulated Industries Are Locked Out

Healthcare, finance, legal, and defence cannot deploy black-box AI where decisions must be audited or legally defended.

03

Full Automation Requires Control

You cannot delegate what you cannot inspect — and today, we cannot inspect these systems.

// OUR APPROACH

Reimagining AI Through a Physics Lens.

⬡ GLASS BOX · Tracing Neuron Activation · ICML 2026

[Diagram: the same prompt, "the capital of Mexico is", traced through a glass-box network of 3 interpretable layers from input to output.]

We believe information has its own physics. Reimagining AI through this lens reveals a fundamental mathematical structure, and building on that structure gives us models that are interpretable, steerable, and efficient by design.

Interpretable
One neuron, one concept. Full audit trail from input to output.
Steerable
Target specific neurons and correct behaviour directly, with no retraining (see the sketch below).
Efficient
10× fewer parameters, 90% faster training.
Performant
Performance matches or exceeds State of the Art.
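
As a sketch of the targeted steering described above, assuming monosemantic neurons (the function, index, and values below are our own illustration, not a published Azetta API):

```python
import jax.numpy as jnp

def steer(activations, neuron_idx, scale):
    """Scale one neuron's activation; scale=0.0 switches its concept off."""
    return activations.at[neuron_idx].set(activations[neuron_idx] * scale)

# Hypothetical: if neuron 1 were the monosemantic "sycophancy" neuron,
# correcting the behaviour is a one-line activation edit at inference
# time, with no weight updates and no retraining.
acts = jnp.array([0.2, 3.1, -0.4])            # toy layer activations
print(steer(acts, neuron_idx=1, scale=0.0))   # -> [ 0.2  0.  -0.4]
```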

// OUR RESEARCH

Pioneering the Field of Physics Grounded AI.

YAT KERNEL

The YAT Kernel is a physics-grounded Mercer kernel that captures both alignment and proximity, creating highly efficient gravity wells in representation space.

ⵟ(x, w) = (x·w)² / ‖x − w‖²

Unlike dot products and cosine similarity, the YAT kernel measures how strongly a weight vector acts as an attractor for an input. Each neuron bends representation space around itself, creating distinct, non-overlapping gravity wells. The result: monosemantic neurons by design, and interpretability without any post-hoc approximation.
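
A minimal JAX sketch of the formula above; the small epsilon guarding the x = w singularity and the vmap batching are our own additions:

```python
import jax.numpy as jnp
from jax import vmap

def yat_kernel(x, w, eps=1e-8):
    """ⵟ(x, w) = (x·w)² / ‖x − w‖²: alignment over proximity."""
    alignment = jnp.dot(x, w) ** 2        # squared dot product rewards alignment
    proximity = jnp.sum((x - w) ** 2)     # squared distance rewards closeness
    return alignment / (proximity + eps)  # eps avoids division by zero at x == w

# Score one input against every neuron's weight vector (rows of W).
yat_layer = vmap(yat_kernel, in_axes=(None, 0))

x = jnp.array([1.0, 0.5, -0.2])
W = jnp.array([[0.9, 0.6, -0.1],      # aligned and close: deep gravity well
               [-1.0, 0.3, 0.8]])     # misaligned and far: weak response
print(yat_layer(x, W))                # the first neuron dominates
```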

ⵟ — YAT Kernel Manifold [interactive visualisation]

2 PAPERS SUBMITTED · View all research →

// OUR PRODUCTS

We Build Glass Box Models + the Tools to Understand Them.

As we establish and grow the field of Physics Grounded AI, we also build products that give researchers, engineers, and enterprises seamless, safe, production-ready access to our research findings.

PERIODICA

// The first MLOps platform built for interpretability

Upload any AI model — Periodica maps every neuron to a concept and lets you steer behaviour directly. No retraining. No black boxes.
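
Purely as an illustration of that upload, map, steer workflow (the periodica package, its client, and every method name below are hypothetical assumptions, not Periodica's published API):

```python
# Hypothetical sketch of the Periodica workflow; nothing here is a real API.
from periodica import Client  # hypothetical package and client

client = Client(api_key="...")                   # credential elided
model = client.upload("llama-3-8b.safetensors")  # any PyTorch/TF/ONNX/HF model

report = model.map_neurons()                     # neuron -> concept map
for flag in report.flags():                      # e.g. sycophancy, PII, bias
    print(flag.neuron_id, flag.concept)

model.steer(neuron_id=4711, scale=0.0)           # switch a flagged concept off
```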

[Interactive demo: Periodica, the full AI model interpretability platform. Drop your model (PyTorch · TensorFlow · ONNX · HuggingFace) or pick a demo model: GPT-J-6B, Llama-3-8B, or Mistral-7B. In the demo run, Periodica maps all 8,192 neurons of Llama-3-8B, raises 4 flags (sycophancy, hallucination, PII, bias), and lets you click a flagged neuron or search to probe any concept.]

                   SOTA Models                     Aether Models
Size               ~70B avg. parameters            ~7B (10× smaller)
Performance        SOTA                            SOTA+
Interpretability   black box                       fully interpretable
Steerability       trial and error + retraining    targeted steering, no retraining

Aether Models

// Fully interpretable SOTA models, available via API

Aether models are Physics Grounded AI in production: fully interpretable and steerable models that match or exceed state-of-the-art performance with ~10× fewer parameters.

Fully Interpretable · 10× Smaller · Lower Price · SOTA Performance

// WHO WE ARE

The Founding Team.

Mathematical rigour, entrepreneurial experience, and product acumen.

Taha Bouhsine

Co-Founder & CEO

AI RESEARCHER · GOOGLE DEV

Mathematician and architect of the Physics Grounded AI framework. Authored the core interpretability methodology now under ICML 2026 review. Google Developer Expert for AI and JAX; engineer and computer scientist.

Douglas Seo

Co-Founder & CTO

FOUNDERS INC · 3× FOUNDER · BERKELEY EE+CS

Third-time founder and lead engineer at Azetta AI. CEO of Popper (Founders Inc. Cold Start); CTO and lead founding engineer at Werkflow ($1.5M VC-backed). UC Berkeley Electrical Engineering + Computer Science.

Jose Miguel Luna

Co-Founder & CPO

EX-APPLE · COLUMBIA MBA + MS AI/ML

Ex-Apple Engineering Product Manager for AI/ML products. Founding team member at a YC-backed startup, leading product, data, and tech teams. Schwarzman Scholar; Columbia MBA + MS in AI/ML; co-author of an ICML publication.

Interested? Reach out!

We are looking for researchers, engineers, investors and partners who want to build safe AI with us.

If that sounds like you, shoot us an email!