Azetta Manifesto
The Aetherial State of AI
Physics was once constrained by the concept of the "luminiferous aether" as a medium for light, a theory that experiments (most famously Michelson-Morley, 1887) failed to support. Albert Einstein's 1905 theory of special relativity rendered the aether concept unnecessary, providing a new paradigm.
AI is facing a similar conceptual impasse, relying on foundational assumptions that obscure a more fundamental understanding and await a similar breakthrough.
The Blackbox Dilemma
Optimizing for immediate utility creates unexplainable components, leaving critical questions unanswered:
- What is information?
- How is it structured and formed?
- How can it be quantitatively measured, identified, and compared?
- How can it be controlled and, ultimately, learned?
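The questions above have at least one classical, quantitative starting point: Shannon's information theory, which measures the information content of a source by the entropy of its symbol distribution. The sketch below is a minimal illustration of that baseline measure, not a description of Azetta's own framework; the function name and example strings are our own.

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Shannon entropy, in bits per symbol, of a message's
    empirical character distribution: H = -sum(p * log2(p))."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Four equally likely symbols carry exactly 2 bits per symbol.
print(shannon_entropy("abcd"))  # 2.0
# A single repeated symbol carries no information (entropy 0).
print(shannon_entropy("aaaa"))
```

Entropy gives a number that can be measured and compared across sources, which is the sense in which "quantitatively measured" has a rigorous classical answer; how information is structured, identified, and learned goes beyond this single scalar.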
At Azetta.ai, we have answered these questions. Our internal systems are built entirely on these first principles. Each component is interdependent and originates from our fundamental research into the nature of information and intelligence.
Current AI development often deviates from this, building upon opaque, ambiguous, and opinionated research. The field has prioritized "making it work" heuristically over "building it right" from fundamentals.
The Promise and Its Betrayal
The promise of AI began with a simple idea: extend human curiosity by sharing methods, datasets, and ideas in the open.
The world we inhabit now looks different. Massive compute budgets create paywalls, sealed APIs hide reasoning, and data is harvested with memorization-heavy tricks no one can examine.
That shift turned intelligence into a luxury: access narrowed, pricing out independent builders and silencing the communities that need these tools most.
Those pressures make verification rare, keep most of the world's languages outside the loop, and ask institutions to trust black boxes they can neither audit nor appeal.
Our Response
Our response is clear. Intelligence is infrastructure, so every training run, dataset choice, and architectural decision must be documented, explainable, and reviewable. Cost must fall, so we design for CPU-first deployment, modest hardware, and transparent pricing. Knowledge must be omnilingual, so our models speak across cultures without translation tolls. And models must be whitebox by law: if a weight, gradient, or explanation cannot be inspected, it has no place in production.
We build from first principles, where every decision traces back to fundamental questions about information itself. We reject the heuristic shortcuts that create black boxes, choosing instead the rigor of explainable foundations.
We end with the same invitation that drove the early promise: builders, investors, and lawmakers must lower barriers, legislate whitebox requirements, insist on omnilingual reach, and demand logs, proofs, and plain language from every vendor. Together we keep AI open, democratic, and worthy of public trust.