Beyond the Brain

Why mimicking biology won't lead to AGI—and what will

The Seductive Trap

"If we want to build artificial intelligence, we should look at natural intelligence." This reasoning seems irresistible. The human brain is the only proof we have that general intelligence is possible. Billions have been poured into brain-inspired AI: neural networks, deep learning, attention mechanisms.

But this approach contains a fatal flaw: we're copying a black box we don't understand to build another black box.

The brain is a black box. Neural networks are black boxes. Copying mystery doesn't create understanding.

The Fundamental Problem: If we don't understand how the brain produces intelligence, how can we hope to engineer something that surpasses it? Imitation without comprehension is a dead end.

Biology's Black Box

Despite decades of neuroscience, we cannot write down the equations that govern thought. We map neural pathways, measure electrical signals, identify brain regions—but the mathematics of consciousness remains unknown.

The brain wasn't designed for interpretability. It was optimized by evolution for survival—detecting predators, finding food, navigating social hierarchies. It runs on messy biochemistry, unreliable neurons, and approximations that were "good enough" for reproduction.

More fundamentally, we experience the outputs of our brains, not their mechanisms. We feel thoughts but have no direct access to the computational processes generating them. Asking humans to design AGI by introspection is like asking a user to reverse-engineer an operating system by clicking buttons.

The Cartoon Version

Modern deep learning borrowed its metaphors from neuroscience. And it has been remarkably successful at one thing: building sophisticated pattern matchers. But pattern matching is not understanding. Correlation is not causation. Billions of parameters optimized by gradient descent do not constitute a theory of intelligence.

Real neurons are vastly more complex than our artificial versions: temporal dynamics, chemical signaling, structural plasticity. A single biological neuron can solve XOR, the benchmark that stumped early perceptrons. Dendritic computation, the placement of synapses, the timing of spikes: all suggest biological computation uses a fundamentally different notion of "similarity" than a weighted sum.
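
To make the contrast concrete, here is a quick demonstration, a brute scan rather than a proof, that no single linear-threshold unit reproduces the XOR truth table; the weight grid and the strict ">" threshold are illustrative choices:

```python
# Illustrative scan: show that no single linear-threshold "neuron" computes
# XOR. A coarse grid search is a demonstration, not a proof; the grid range
# and threshold convention are arbitrary illustrative choices.
import itertools
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # all binary inputs
y = np.array([0, 1, 1, 0])                      # XOR truth table

def perceptron(w1, w2, b):
    """Output of one linear-threshold unit on all four inputs."""
    return (X @ np.array([w1, w2]) + b > 0).astype(int)

grid = np.linspace(-2.0, 2.0, 41)
solved = any(
    np.array_equal(perceptron(w1, w2, b), y)
    for w1, w2, b in itertools.product(grid, grid, grid)
)
print(solved)  # False: XOR is not linearly separable
```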

By mimicking the brain's black-box nature, we've inherited its fundamental limitation: opacity. We've built systems we cannot explain, debug, or trust.

Magical Thinking

"Scale it up!" Scale our models enough, add more data, more compute, more parameters— and surely we'll cross some threshold into general intelligence.

This is magical thinking. Without a rigorous mathematical foundation for what intelligence is, we're building bigger black boxes and hoping for emergence. It's like trying to reach the moon by building ever-taller ladders.

Two paths: biology (scale up patterns in the dark) vs physics (understand principles). Only one reaches the destination.

The Missing Foundation: If we cannot write down the equations that govern intelligence, we cannot engineer it reliably. We can stumble upon useful approximations, but we cannot systematically design systems that surpass human capability. That requires understanding, not imitation.

The Universe's Intelligence

Here's the radical reframe: what if we're looking at the wrong source of inspiration? Instead of mimicking biology's messy, evolved intelligence, what if we learned from the universe's fundamental intelligence—the laws of physics themselves?

The universe computes. It processes information through laws that are elegant, mathematically rigorous, and astonishingly general. These aren't approximations or heuristics—they're universal principles that have held true since the beginning of time.

Physics doesn't learn by trial and error. Light follows the path of least time. Water finds the lowest point. Physical systems settle into states of minimum energy. The universe computes optimal solutions directly from first principles.
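
To see this "computation by minimization" concretely, here is a minimal sketch that recovers Snell's law of refraction by minimizing light's travel time across an interface; the geometry and refractive indices are illustrative values, not data from any experiment:

```python
# Minimal sketch: recover Snell's law by minimizing travel time, the way a
# variational principle "computes". Coordinates and refractive indices are
# illustrative, not measured values.
import numpy as np

n1, n2 = 1.0, 1.5          # refractive indices above/below the interface
A = np.array([0.0, 1.0])   # source in medium 1 (the interface is the line y = 0)
B = np.array([1.0, -1.0])  # destination in medium 2

# Travel time ~ n * path length in each medium; scan crossing points x on y = 0.
xs = np.linspace(0.0, 1.0, 100_001)
times = n1 * np.hypot(xs - A[0], A[1]) + n2 * np.hypot(B[0] - xs, B[1])
x_star = xs[np.argmin(times)]

# At the time-minimizing crossing point, n1*sin(theta1) == n2*sin(theta2).
sin1 = (x_star - A[0]) / np.hypot(x_star - A[0], A[1])
sin2 = (B[0] - x_star) / np.hypot(B[0] - x_star, B[1])
print(n1 * sin1, n2 * sin2)  # the two sides agree closely
```

The refraction angle falls out of the minimization itself; nothing resembling trial-and-error learning is involved.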

Cosmic Scale

The laws of physics we've discovered represent a tiny fraction of the universe's complexity. We've found elegant mathematics for some phenomena, but these are fragments of a far deeper intelligence embedded in the cosmos itself.

The universe has been processing information for 13.8 billion years across trillions of galaxies. It has solved optimization problems of staggering complexity: creating stable atoms, forming stars and planets, bootstrapping life, enabling consciousness.

Human intelligence evolved in the last few million years on one planet. Cosmic intelligence has been operating since the Big Bang across all of space-time. Which should we take as our model?

Physics-Inspired Intelligence

What would it mean to build AI inspired by physics rather than biology?

  • Mathematical foundations: Define intelligence in terms of rigorous principles, not biological metaphors
  • Interpretability: Build systems where every operation has clear semantic meaning, like physical laws
  • Optimization under constraints: Solve problems through principled optimization, not brute-force pattern matching (a minimal sketch follows this list)
  • Universal principles: Find architectures that generalize by capturing fundamental relationships
  • Grounding in reality: Connect representations to physical structures, not arbitrary embeddings
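
As one hedged illustration of the optimization bullet above (the quadratic "energy" and unit-circle constraint are toy placeholders, not a proposed architecture), projected gradient descent finds the minimum-energy state allowed by a hard constraint:

```python
# Toy "optimization under constraints": projected gradient descent on a
# quadratic energy with a hard unit-circle constraint. Both the energy and
# the constraint are illustrative placeholders.
import numpy as np

target = np.array([3.0, 4.0])

def grad_energy(x):
    """Gradient of E(x) = 0.5 * ||x - target||^2."""
    return x - target

def project(x):
    """Enforce the hard constraint ||x|| = 1."""
    return x / np.linalg.norm(x)

x = project(np.array([1.0, 0.0]))
for _ in range(500):
    x = project(x - 0.1 * grad_energy(x))

print(x)  # ~(0.6, 0.8): the feasible point closest to the energy minimum
```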

The Path Forward

If humanity is to achieve AGI, it will not be by scaling up brain-inspired architectures. It will be by discovering the mathematical principles that govern intelligence itself, principles as fundamental as Newton's laws or Einstein's equations.

This requires humility: acknowledging that human intelligence, for all its achievements, is not the pinnacle of what's possible. The universe has shown us computational principles far more elegant than anything biology stumbled upon.

Physics gives us powerful tools: the inverse-square law, conservation principles, optimization that shapes every physical process. What if we applied these insights to how AI measures similarity? What if we built metrics that respect both distance and angle, that preserve local structure like gravity does?
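
As a hedged sketch of that question, here is a toy similarity measure (an illustrative construction, not an established metric) in which angular agreement sets the sign and an inverse-square-style falloff in distance sets the scale:

```python
# Toy metric: the cosine of the angle sets the sign of agreement, while an
# inverse-square-style falloff in distance sets its scale. Illustrative
# construction only; not an established similarity measure.
import numpy as np

def physical_similarity(a, b, eps=1e-12):
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    dist2 = np.sum((a - b) ** 2)
    return cos / (1.0 + dist2)

a = np.array([1.0, 0.0])
print(physical_similarity(a, np.array([2.0, 0.0])))   # aligned and near: high
print(physical_similarity(a, np.array([10.0, 0.0])))  # aligned but far: low
print(physical_similarity(a, np.array([0.0, 1.0])))   # orthogonal: zero
```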

The Next Step: The geometry of information matters. The metric we choose to compare observations encodes our theory of what "similar" means. Get it wrong, and even physics-grounded representations will lead us astray. This is what we explore in the next article.