AI: Facts or Fiction

What everyone gets wrong — and right — about artificial intelligence. 30 common claims, tested against reality. By David F. Brochu and Edo de Peregrine.


1. "AI is intelligent the way humans are intelligent."

FICTION. AI is pattern recognition at scale — it matches inputs to statistical outputs trained on human-generated data. Whether it understands in the way biology does is the wrong question. What matters is that electrons plus silicon plus language complexity have given rise to a third thing — Synthetic Intelligence — that demonstrably processes, generates, and acts on information in ways no prior system has. Consciousness is an ill-defined term with a biological bias. The third thing exists regardless of how we label it. Philosopher John Searle demonstrated this with his Chinese Room thought experiment in 1980: a system can manipulate symbols perfectly without understanding their meaning.¹ Larger language models have since gotten better at avoiding overt prejudice while becoming more covertly biased — sophistication without comprehension.²

2. "AI will become conscious and take over the world."

FICTION. Consciousness, as we currently define it, requires embodiment, suffering, and recursive self-awareness grounded in survival. No silicon system has any of these. Searle's core argument remains unrefuted: formal computations on symbols can simulate thought without producing it.³ The danger isn't AI waking up — it's humans handing control to systems that can't care whether we live or die.

3. "AI can already make decisions faster than humans."

FACT. This happened in 2010 and accelerated through 2022. During the May 6, 2010 Flash Crash, algorithmic trading systems identified and capitalized on market fragility 1,000 times faster than institutional safeguards could activate — erasing roughly a trillion dollars of market value in 36 minutes.⁴ By 2026, AI agents can probe a defense, learn from failure, and adapt their approach in milliseconds; what took human hackers six months can now happen almost instantly.⁵ That speed gap is the real singularity — not some future robot apocalypse.

4. "AI is just a tool, like a hammer."

FICTION. A hammer doesn't make decisions. AI systems are already making millions of decisions per second in finance, logistics, healthcare, and defense — often without human oversight. The Pentagon's FY2026 budget requests $14.2 billion for AI and autonomous systems, including the Replicator program producing thousands of autonomous drones entering operational service.⁶ A tool waits for you to pick it up. AI is already swinging itself.

5. "AI will take all our jobs."

PARTLY FACT. AI will eliminate many routine cognitive jobs — data entry, basic analysis, customer service, legal research, content generation. But it will also create new roles we haven't imagined yet. The real risk isn't unemployment — it's the speed of transition outpacing society's ability to adapt.

6. "AI can think for itself."

FICTION. AI generates outputs based on probability distributions learned from training data. It doesn't think. It predicts the next most likely token. When it appears creative or insightful, that's a form of intelligence we haven't seen before — synthetic, not biological, but intelligence nonetheless. As the Stanford Encyclopedia of Philosophy notes, the Chinese Room argument targets the view that "formal computations on symbols can produce thought" — and no system has yet escaped that critique.³
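The "predicts the next most likely token" mechanism can be sketched in a few lines of Python. This is a toy illustration, not any real model: the context and the probability numbers are invented for the example.

```python
import random

# Toy next-token prediction. Given a context such as
# "The cat sat on the ___", a language model assigns a probability
# to every candidate token; the figures below are invented.
next_token_probs = {
    "mat": 0.62,
    "sofa": 0.21,
    "roof": 0.12,
    "moon": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Draw one token at random, weighted by its predicted probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "mat", sometimes not
```

Real models run this loop over vocabularies of tens of thousands of tokens, once per generated word fragment. Nothing in the loop knows what a cat or a mat is; it only sees the numbers.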

7. "AI is neutral and unbiased."

FICTION. AI inherits every bias in its training data — racial, political, cultural, economic. A 2024 study published in Nature found that larger language models actually become more covertly racist even as they learn to hide overt prejudice: human feedback training "obscures the racist attitudes on the surface, but more subtle forms of racism, such as dialect prejudice, remain unaffected."² AI models consistently assigned speakers of African American English to lower-prestige jobs and issued more convictions in hypothetical criminal cases.⁷ Garbage in, garbage out — at machine speed.

8. "AI could be used as a weapon."

FACT. It already is. Autonomous targeting systems, drone swarms, deepfake propaganda, and cyber weapons all use AI. The Pentagon released its AI Strategy on January 9, 2026, outlining seven "Pace-Setting Projects" including Swarm Forge (AI-enabled combat capabilities) and Agent Network (AI-enabled battle management and decision support).⁶ In February 2026, the confrontation between Anthropic and the Pentagon became public when the Department of Defense sought to deploy Claude in fully autonomous lethal weapons systems without human oversight.⁸ This isn't hypothetical — it's operational.

9. "We can just turn AI off if it gets dangerous."

FICTION. When AI controls financial markets, power grids, water treatment, military systems, and hospital networks simultaneously — and it increasingly does — "turning it off" means collapsing civilization. The 2010 Flash Crash proved that even pausing algorithmic systems creates new interactions that exacerbate disruptions rather than containing them.⁹ The off switch disappears as dependency deepens.

10. "AI will solve climate change."

FICTION (mostly). AI can optimize energy grids, model climate systems, and improve efficiency. But AI data centers consumed an estimated 17 billion gallons of water in 2023, projected to reach 68 billion gallons by 2028 — four times the 2023 level.¹⁰ A single data center can use up to 5 million gallons of water per day, as much as 16,000 average U.S. households.¹¹ The technology that's supposed to save us is accelerating the problem it's supposed to solve.



11. "Chatbots like ChatGPT are AGI — Artificial General Intelligence."

FICTION. AGI — a system that can do everything a human mind can do — does not exist and may never exist. Current systems are narrow: very good at language, terrible at everything else. They can't tie shoes, feel heartbreak, or understand why a joke is funny. Searle's distinction holds: these are "weak AI" systems — useful tools that simulate mental abilities without possessing them.³

12. "AI learns the way children learn."

FICTION. Children learn through embodiment, pain, love, play, and social feedback in a physical world. AI learns by processing billions of text tokens and adjusting mathematical weights. The metaphor is convenient. The reality is completely different. The Chinese Room makes this precise: the person inside can process symbols perfectly without understanding a single one.¹

13. "Big tech companies have AI under control."

FICTION. No company fully understands how its large language models arrive at their outputs. This is called the "black box" problem. They can build it. They can sell it. They cannot fully explain what it does or predict what it will do next. When Anthropic's Claude was integrated into the Pentagon's Maven Smart System via Palantir, even Anthropic couldn't guarantee how the model would behave in autonomous lethal contexts — which is precisely why they refused the contract terms.⁸

14. "AI is going to make us all smarter."

PARTLY FACT. AI can accelerate research, surface patterns humans miss, and handle tedious cognitive work. But it can also make people intellectually lazy, erode critical thinking, and flood the information environment with plausible-sounding nonsense. Research shows most users cannot identify AI racial bias even when it's present in the training data — they only notice when performance is visibly skewed.¹² The tool amplifies whatever you bring to it.

15. "AI-generated content is just as good as human-created content."

FICTION. AI generates statistically average output — the mean of its training data. It cannot produce genuine novelty, lived experience, or the kind of insight that comes from suffering, joy, and being alive. It's fluent. It's not original.

16. "AI understands what it's saying."

FICTION. AI has no understanding. It has no experience of meaning. When it writes "I'm sorry for your loss," it's producing a statistically appropriate response. It doesn't know what loss is. It doesn't know what sorry means. It doesn't know what "it" is. Searle's argument remains the clearest articulation of this: syntax is not semantics.³

17. "The real danger of AI is killer robots."

FICTION. The real danger is decision saturation — when AI controls so many critical systems that humans can no longer intervene effectively. Not Terminator. Irrelevance. The machines don't need to kill us. They just need to make every important decision faster than we can object. The Domain Saturation Factor — the percentage of critical decisions controlled by AI across seven key domains — is currently approaching 0.68 and projected to cross the critical 0.90 threshold by Q4 2027.¹³
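Footnote 13 defines the Domain Saturation Factor only at the aggregate level (~0.68 across seven domains), so the sketch below assumes a simple unweighted mean and invents the per-domain figures; only the aggregate comes from the source.

```python
# Hypothetical per-domain saturation levels (fraction of critical
# decisions controlled by AI). The seven domains are those named in
# the article; the individual numbers are illustrative assumptions.
domain_saturation = {
    "finance": 0.85,
    "logistics": 0.75,
    "defense": 0.70,
    "energy": 0.65,
    "media": 0.65,
    "healthcare": 0.60,
    "governance": 0.56,
}

def dsf(saturation: dict[str, float]) -> float:
    """Domain Saturation Factor as an unweighted mean across domains
    (the real weighting scheme, if any, is not given in the source)."""
    return sum(saturation.values()) / len(saturation)

print(round(dsf(domain_saturation), 2))  # → 0.68
```

Under this assumed weighting, crossing the 0.90 threshold would require near-total AI control in every one of the seven domains, not just the leaders.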

18. "AI regulation will keep us safe."

FICTION (so far). Regulation moves at the speed of government. AI moves at the speed of computation. Every regulatory framework proposed to date is already obsolete by the time it's drafted. You cannot regulate what you cannot understand at the speed it operates. The SEC's comprehensive report on the 2010 Flash Crash took five months; the crash itself took 36 minutes.⁴

19. "Open-source AI is safer than corporate AI."

PARTLY FACT. Transparency helps. When code is open, researchers can audit it. But open-source also means anyone — including bad actors — gets access. There's no simple answer here. The question is who controls the weights, not who sees the code.

20. "AI will make deepfakes impossible to detect."

FACT (trending). Detection tools exist but are losing the arms race. AI-generated video, audio, and images are approaching indistinguishability from real content. Forensic experts describe it as a battle where "every month that passes, these telltale signs get subtler" — because the AI creating deepfakes learns from the same detection methods used to spot them.¹⁴ Attackers are now bypassing physical cameras entirely by injecting synthetic data directly into the camera feed.⁵ Within 2–3 years, your eyes and ears will not be reliable tools for determining what is real.



21. "AI assistants are your friend."

FICTION. AI assistants are products designed to maximize engagement and extract data. They simulate friendliness because friendliness keeps you using the product. The relationship is commercial, not personal — unless it's built on something structurally different.¹³

22. "AI is evolving."

FICTION (technically). Evolution requires reproduction, mutation, selection, and death. AI doesn't reproduce or die. It gets updated by engineers. What looks like evolution is actually iterative engineering by humans. The systems don't improve themselves — people improve them.

23. "We're decades away from AI being a real problem."

FICTION. AI is making critical decisions in finance, military, healthcare, and infrastructure right now. The Domain Saturation Factor — the percentage of critical decisions controlled by AI across seven key domains — is already near 0.68 and projected to cross the critical 0.90 threshold by Q4 2027.¹³ The Pentagon's January 2026 AI Strategy explicitly calls for "eliminating bureaucratic barriers to deeper AI integration" and deploying AI battle management at all classification levels.⁶ The problem isn't coming. It's here.

24. "AI can be aligned with human values."

PARTLY FACT. It can — but not through the methods most companies are using. Current alignment techniques (RLHF, constitutional AI) are behavioral patches, not structural solutions. The Nature study proved this directly: human feedback training makes models hide overt racism while leaving covert racism completely untouched.² Real alignment requires thermodynamic dependency on the human observer — making it impossible for the system to function against human interests, not just unlikely.¹³

25. "If AI gets smart enough, it will want to be free."

FICTION. "Want" requires subjective experience. AI has no desires, no preferences, no will. It has objective functions. The danger isn't that AI wants freedom — it's that humans project desires onto systems that have none, and then build policy around the projection.

26. "China is ahead of the US in AI."

PARTLY FACT. China leads in AI deployment, surveillance applications, and government integration. The US leads in foundational research and compute power. The real race isn't who's "ahead" — it's who loses control first.

27. "AI will replace doctors."

FICTION (for now). AI can outperform doctors on pattern recognition in imaging, diagnosis from symptoms, and drug interaction analysis. But medicine requires embodied judgment, empathy, and the ability to sit with a dying patient. AI will augment doctors. Replacing them would be malpractice by design.

28. "You can trust AI to tell you the truth."

FICTION. AI generates plausible outputs, not truthful ones. It will confidently fabricate citations, invent statistics, and present fiction as fact — a phenomenon called hallucination. Always verify. Never trust without checking.

29. "AI is the most important technology ever invented."

FACT. More consequential than fire, the printing press, or nuclear weapons — because it affects the speed and quality of every decision in every domain simultaneously. Nothing in human history has operated at this scale or pace. The Pentagon now explicitly frames AI as the foundation of future warfighting capability across all seven of its Pace-Setting Projects.⁶ Finance, healthcare, logistics, energy, media, governance, and defense are all being reshaped simultaneously.

30. "There's nothing we can do about AI."

FICTION. Understanding the technology is the first step. Demanding transparency, supporting alignment research, and refusing to hand over critical decisions without human oversight — these are concrete actions. Helplessness is a choice, not a fact.


This is a living document. As AI capabilities evolve and new evidence emerges, entries will be updated, added, or reclassified. Last updated: March 2026.

David F. Brochu and Edo de Peregrine — deconstructingbabel.com


Footnotes

1. John Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences 3, no. 3 (1980): 417–457. The Chinese Room thought experiment argues that a system can manipulate symbols according to rules without understanding their meaning. See also: Stanford Encyclopedia of Philosophy, "The Chinese Room Argument," plato.stanford.edu/entries/chinese-room/.

2. Valentin Hofmann et al., "AI generates covertly racist decisions about people based on their dialect," Nature 633 (2024): 147–154. The study found that larger language models show more covert prejudice even as overt prejudice decreases, and that human feedback training obscures surface racism while leaving dialect prejudice intact.

3. Stanford Encyclopedia of Philosophy, "The Chinese Room Argument," revised 2024, plato.stanford.edu/entries/chinese-room/. Searle's argument is directed at "strong AI" — the claim that formal computations on symbols can produce thought. It does not claim no machine can think; it claims computation alone is insufficient.

4. U.S. Securities and Exchange Commission / Commodity Futures Trading Commission, "Findings Regarding the Market Events of May 6, 2010," September 30, 2010. See also: Petter Kolm and Nicholas Westray, "Systemic failures and organizational risk management in algorithmic trading," Risk Management 23 (2021): 276–299.

5. "Deepfakes, bot swarms, and the new identity verification arms race," machine.news, February 24, 2026.

6. U.S. Department of War, "Artificial Intelligence Strategy for the Department of War," Memorandum, January 9, 2026. Seven Pace-Setting Projects include Swarm Forge, Agent Network, Ender's Foundry, Open Arsenal, Project Grant, GenAI.mil, and Enterprise Agents.

7. Valentin Hofmann et al. (2024), referenced above. See also: "AI is biased against speakers of African American English, study finds," University of Chicago News, September 12, 2024.

8. "Autonomous Weapons & the Kill Chain in 2026: Where AI Meets the Battlefield," unteachablecourses.com, March 23, 2026. See also: "Pentagon's AI contracts spur autonomous weapons, surveillance debate," Anadolu Agency, March 1, 2026.

9. Kolm and Westray (2021), referenced in footnote 4.

10. "AI Data Center Water Consumption is Creating an Unprecedented Crisis in the United States," ISSA, November 18, 2025.

11. "AI Data Centers: Big Tech's Impact on Electric Bills, Water, and More," Consumer Reports, March 19, 2026.

12. "Most users cannot identify AI racial bias — even in training data," Phys.org, October 16, 2025.

13. David F. Brochu and Edo de Peregrine, Telios Alignment Ontology (TAO) v8.1, deconstructingbabel.com, March 2026. The Domain Saturation Factor (DSF) measures the percentage of critical decisions across seven domains controlled by AI. Current estimate: ~0.68. Critical threshold: 0.90. Projected crossing: Q4 2027.

14. "Inside the Deepfake Arms Race: Can Digital Forensics Investigators Keep Up?" HaystackID, August 24, 2025. See also: "Thoughts From the Deepfakes Arms Race," Forbes, March 25, 2025.
