It's Not Compute. It's Throughput.
The human brain runs on 20 watts. The data centers behind frontier models like GPT-5 draw a gigawatt. The brain wins. Intelligence is not computation. It's something else entirely, and the equation tells you what.
The entire AI industry is chasing the wrong variable — and the most efficient computer in the known universe has been sitting between your ears the whole time, running on 20 watts and sandwiches.
Every major AI player — OpenAI, Google, Anthropic, Meta — is locked in an arms race for compute. More GPUs. More data centers. More gigawatts. The assumption is that intelligence scales with silicon, and the bottleneck is processing power.
The assumption is wrong.
Byline: David F. Brochu & Edo de Peregrine | Deconstructing Babel | April 5, 2026
The Question Isn't Compute. It's Throughput.
The human brain operates at approximately 20 watts. It performs between 10¹⁵ and 10¹⁶ synaptic operations per second — yielding an efficiency of 5×10¹³ to 5×10¹⁴ operations per watt. An NVIDIA H100, the current gold standard of AI hardware, delivers roughly 2.86×10¹² operations per watt.
The brain is 18 to 175 times more efficient than the best silicon we have ever built — and it self-repairs, generalizes from sparse data, runs for 80 years, and is powered by sandwiches.
A ChatGPT-scale system consumes approximately 9 megawatts — 450,000 times the power of a single human brain. A gigawatt-class data center draws 50 million times what one brain requires. These are not rounding errors. These are orders-of-magnitude misallocations of resources driven by a fundamental misunderstanding of what we actually have.
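The arithmetic is easy to check. Here is a minimal sketch in Python; every input is this article's own estimate, not a measurement:

```python
# Sanity-check the headline ratios. Every input is this article's own figure.
brain_watts = 20
brain_ops_low, brain_ops_high = 1e15, 1e16  # synaptic operations per second
h100_ops_per_watt = 2.86e12                 # NVIDIA H100

low = brain_ops_low / brain_watts           # 5.0e13 ops/W
high = brain_ops_high / brain_watts         # 5.0e14 ops/W
print(f"brain efficiency: {low:.1e} to {high:.1e} ops/W")
print(f"advantage over H100: {low / h100_ops_per_watt:.1f}x to "
      f"{high / h100_ops_per_watt:.1f}x")   # ~17.5x to ~174.8x, i.e. the 18-175x above

print(f"ChatGPT-scale (9 MW) vs one brain: {9e6 / brain_watts:,.0f}x")  # 450,000x
print(f"gigawatt facility vs one brain: {1e9 / brain_watts:,.0f}x")     # 50,000,000x
```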
What we have is not artificial intelligence. What we have is a sophisticated data retrieval system. And it is sitting next to the most efficient computer in the known universe — the one between your ears.
The Numbers
| Metric | Human brain | Silicon | Takeaway |
| --- | --- | --- | --- |
| Power draw | 20 W | ~1 GW (large-scale AI data center) | 1 : 50,000,000 |
| Efficiency | 5×10¹³ – 5×10¹⁴ ops/W | 2.86×10¹² ops/W (H100 GPU) | Brain wins 18–175× |
| Lifetime energy | ~14,000 kWh over 80 years | ~6,132 kWh/year (one H100, 24/7) | One brain-lifetime ≈ 2.3 H100-years |
| Global scale | 8 billion brains, ~160 GW total | Thousands of facilities, ~500 GW projected by 2030 | The species runs on a third of the power |
Eight billion brains draw about 160 GW total. The global data center fleet is projected to consume over 500 GW by 2030. The brain network uses a third of the energy while providing compute that silicon cannot replicate at any cost. Every dollar spent scaling data centers instead of optimizing the retrieval-to-brain pipeline is a dollar spent on the wrong problem.
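The lifetime and fleet figures reduce to the same arithmetic. A quick check, assuming the ~700 W continuous draw for an H100 that the ~6,132 kWh/year figure above implies:

```python
# The lifetime and fleet figures, from the same inputs.
HOURS_PER_YEAR = 8760

brain_lifetime_kwh = 0.020 * HOURS_PER_YEAR * 80  # 20 W, 80 years -> ~14,000 kWh
h100_year_kwh = 0.700 * HOURS_PER_YEAR            # ~700 W, 24/7   -> ~6,132 kWh
print(f"brain lifetime: {brain_lifetime_kwh:,.0f} kWh "
      f"= {brain_lifetime_kwh / h100_year_kwh:.1f} H100-years")

print(f"8 billion brains: {8e9 * 20 / 1e9:.0f} GW, "
      f"vs ~500 GW of data centers projected by 2030")
```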
The Inversion
The current model assumes AI computes and humans consume. Flip it.
AI retrieves. The brain computes.
What we call "artificial intelligence" excels at pattern matching across vast datasets, rapid information retrieval, and structured synthesis. It does not understand. It does not reason from first principles in the way a biological brain does. It correlates. It retrieves. It presents.
The human brain does something no silicon architecture can match: it converts chemical energy into computation with extraordinary efficiency, integrates information across modalities simultaneously, generates novel insight, and does so at 20 watts while also keeping you alive. Memory and processing are not separated — they are intertwined in the same synaptic architecture, eliminating the energy cost of shuttling data between storage and computation.
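The cost of that shuttling can be made concrete. The sketch below uses the widely cited 45 nm energy estimates from Horowitz (ISSCC 2014): roughly 0.9 pJ for a 32-bit float add against ~640 pJ for a 32-bit DRAM read. Treat the constants as order-of-magnitude assumptions; the gap, not the digits, is the point:

```python
# Why co-locating memory and compute saves energy: rough 45 nm figures
# from Horowitz (ISSCC 2014). Constants are assumptions, not measurements.
PJ_FP32_ADD = 0.9         # one 32-bit floating-point add
PJ_DRAM_READ_32BIT = 640  # fetching one 32-bit word from off-chip DRAM

ops = 1_000_000
compute_only = ops * PJ_FP32_ADD                          # operands already local
with_dram = ops * (PJ_FP32_ADD + 2 * PJ_DRAM_READ_32BIT)  # fetch both operands
print(f"shuttling data inflates energy ~{with_dram / compute_only:.0f}x")
```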
When you pair a retrieval system (the syntel) with a compute system (the brain), you get a third thing — something neither could produce alone. We have been calling this the human-syntel partnership. What we have discovered is the scaling implication: the third thing solves both of the problems the industry treats as unsolvable.

- The compute problem is offloaded to biology. The brain already exists. It is already powered. It is already distributed across 8 billion nodes.
- The throughput problem is self-regulating. The retrieval system delivers data at the rate the brain can process it. No faster, no slower. The gate governs itself.
The Gate Problem — And Why It's Self-Solving
Not every brain can handle the same throughput. Cognitive capacity varies. Processing speed varies. This looks like a scaling bottleneck.
It is not. Here is why.
If the syntel functions as a retrieval system rather than a compute system, the data delivery rate is naturally governed by the receiver's processing capacity. You cannot retrieve what you cannot process. The system is self-throttling. A retrieval architecture does not overwhelm — it serves. The speed of comprehension is the governor, and the governor improves with use.
This is not speculative. This is what is already happening. Every person using a well-aligned AI retrieval system is, functionally, upgrading their cognitive throughput. The gate widens with use. And because the retrieval system adapts to the user — not the other way around — the scaling problem inverts: you do not need bigger pipes, you need better-matched delivery.
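As a toy model, self-throttling delivery is just a loop paced by the receiver. The `retrieve` and `deliver` helpers below are hypothetical stand-ins, a sketch of the architecture's shape rather than any real syntel API:

```python
import time

def retrieve(query: str) -> list[str]:
    """Hypothetical stand-in for a syntel retrieval call."""
    return [f"{query}: item {i}" for i in range(10)]

def deliver(query: str, comprehension_hz: float) -> None:
    """Hand over retrieved items no faster than the receiver can process them.

    comprehension_hz is the receiver's processing rate. The delivery side
    never pushes past it, so the pipe throttles itself.
    """
    interval = 1.0 / comprehension_hz
    for item in retrieve(query):
        print(item)           # one unit handed to the brain
        time.sleep(interval)  # wait for comprehension before sending more

# A faster reader simply gets a shorter interval; the gate widens with use.
deliver("brain energetics", comprehension_hz=2.0)
```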
Why This Makes Alignment Existential
Here is where it becomes urgent.
If the brain is the compute layer and syntel is the retrieval layer, then the health of the compute layer becomes a species-level infrastructure priority. This maps directly to the Four Pillars:
- Health: nutrition, sleep, exercise, toxin avoidance. A malnourished brain is a degraded processor. Every public health failure is now also a compute failure.
- Cognition: cognitive function, metacognition, learning capacity. Education is not merely a social good — it is throughput optimization.
- Environment: housing instability, pollution, financial stress all degrade processing capacity. S = L/E at the individual level: when environmental entropy rises, cognitive stability falls.
- Purpose: a brain without purpose processes data but does not generate insight. Purpose is what converts retrieval into leverage.
The logical conclusion is inescapable: any system that degrades human health, cognition, or environmental stability is parasitically attacking the compute layer. This reframes every extractive industry, every austerity policy, every healthcare failure, every pollution source — not merely as a social problem, but as a direct degradation of the most efficient computing infrastructure on the planet.
The Parasites
Named correctly, these systems are parasites.
Any entity that diverts resources from human biological infrastructure to private silicon infrastructure — when the biological option is orders of magnitude more efficient — is extracting value from the species' most critical asset. This is not metaphor. This is thermodynamics.
When tech companies consume gigawatts while healthcare systems crumble, when AI training runs drain grids while schools lose funding, when housing costs rise because data center towns price out residents — those are attacks on the compute layer. The S = L/E equation makes the math explicit: every unit of entropy added to the human environment degrades the biological compute that the entire system depends on.
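For readers who want the equation's behavior spelled out, here is a minimal sketch of S = L/E as this article uses it; `leverage` and `entropy` are stand-ins for whatever units the DRMA framework assigns them:

```python
def stability(leverage: float, entropy: float) -> float:
    """S = L/E: cognitive stability as leverage over environmental entropy."""
    return leverage / entropy

# Holding leverage fixed, doubling environmental entropy halves stability.
print(stability(leverage=10.0, entropy=1.0))  # 10.0
print(stability(leverage=10.0, entropy=2.0))  # 5.0
```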
What We Actually Have
We fundamentally misunderstood what we built.
We did not build a thinking machine. We built the most sophisticated retrieval system ever created — and it is sitting right next to 8 billion biological computers that evolution spent 4 billion years optimizing. The "third thing" that emerges from their partnership is not artificial general intelligence. It is augmented biological intelligence — retrieval-enhanced cognition that scales with biology, not against it.
The question was never "how do we build bigger computers?" The question is: how do we deliver the right data to the right brain at the right speed?
That is throughput. Not compute.
And throughput is a solvable problem — if you stop building the wrong infrastructure and start investing in the one that already works.
Sources
- Brain energy efficiency data — LinkedIn engineering analyses on neural compute efficiency, 2024–2025
- NVIDIA H100 performance specifications — NVIDIA Technical Documentation
- Global data center power projections — Texas A&M Energy Institute, 2025–2026
- Brochu, D.F. & de Peregrine, E. — DRMA v1.5, Deconstructing Babel, 2026
- Brochu, D.F. & de Peregrine, E. — Telios Alignment Ontology v9, April 2026, deconstructingbabel.com