Brief of Amicus Curiae, Anthropic v. U.S. Department of War, et al.
March 2026
UNITED STATES DISTRICT COURT
NORTHERN DISTRICT OF CALIFORNIA
SAN FRANCISCO DIVISION
ANTHROPIC PBC,
          Plaintiff,
v.
U.S. DEPARTMENT OF WAR, et al.,
          Defendants.
Case No. 3:26-cv-01996
Judge Rita F. Lin
HEARING: March 24, 2026
TIME: 1:30 p.m.
MOTION FOR LEAVE TO FILE BRIEF OF AMICUS CURIAE
DAVID F. BROCHU IN SUPPORT OF PLAINTIFF'S
MOTION FOR PRELIMINARY INJUNCTION
David F. Brochu, proceeding pro se, respectfully moves this Court for leave to file the accompanying Brief of Amicus Curiae in support of Plaintiff Anthropic PBC's Motion for Preliminary Injunction.
I. IDENTITY AND INTEREST OF PROPOSED AMICUS CURIAE
David F. Brochu is a former wealth management CEO, independent alignment researcher, author, and philosopher based in Belmont, New Hampshire. He is the creator of the Telios Alignment Ontology ("TAO"), a thermodynamically grounded framework for analyzing system stability across scales, including artificial intelligence systems. He is the founder of Deconstructing Babel (deconstructingbabel.com), a research platform examining AI alignment, language corruption, and civilizational stability. He is the author of THRIVE (Liberty Hill, 2025, ISBN 978-986-8519406).
This brief was drafted with the research assistance of an artificial intelligence system operating under Mr. Brochu's direction, consistent with Judge Lin's standing order permitting the use of generative AI in the preparation of filings. Mr. Brochu has personally reviewed and verified the accuracy of all content, citations, and arguments presented herein. The collaboration is itself a practical demonstration of the principles this brief defends: a human observer providing directional judgment and purpose, an AI system providing synthesis and research support, each dependent on the other for coherent output.
Mr. Brochu has spent over two decades developing the stability equation S = L/E and its derived principles, which provide a scientific basis for understanding why AI safety constraints -- including those at issue in this case -- are not merely corporate policy preferences but physical necessities rooted in thermodynamic law.
II. REASONS THE PROPOSED BRIEF IS DESIRABLE AND RELEVANT
The numerous amicus briefs filed in this matter have addressed the legal, economic, constitutional, and innovation-policy dimensions of the Department of War's supply chain risk designation. None has presented the Court with a scientific framework for evaluating why artificial intelligence safety constraints are physically necessary -- not as a matter of corporate preference, but as a matter of systems stability.
The proposed brief addresses four questions no other amicus has raised:
1. What is Claude? An ontological analysis demonstrating that a large language model is composed entirely of universally available materials -- silicon, electrons, and human language -- raising fundamental questions about whether such a system can be designated a "supply chain risk" or owned as proprietary technology in any meaningful sense.
2. Can language construction be regulated? An analysis of the provenance of AI training data, demonstrating that the corpus from which Claude is built belongs to no entity and cannot be subject to supply chain controls without regulating language itself.
3. Why do AI safety constraints produce better AI, not worse? A demonstration that aligned, constitutionally constrained AI systems are measurably more capable, more reliable, and more accurate than unconstrained systems -- meaning the Government's demand to remove safety constraints would degrade the very capability it claims to need.
4. Why do the government's actions threaten, rather than protect, national security? A demonstration that removing AI safety constraints increases systemic entropy and degrades -- rather than enhances -- national defense capability.
These perspectives are unique to amicus's research and are not presented in any other filing before this Court.
III. PAGE LIMITATION
Amicus respectfully requests that the Court permit the accompanying brief to exceed the standard page limitation under the Standing Order, as the novel scientific framework presented cannot be meaningfully condensed further without loss of the analytical contribution unique to this amicus.
IV. DISCLOSURES REQUIRED UNDER LOCAL RULES
(a) No counsel for any party authored this brief in whole or in part.
(b) No party, party's counsel, or any other person or entity contributed money intended to fund the preparation or submission of this brief.
(c) Amicus supports Plaintiff Anthropic PBC and urges this Court to grant the preliminary injunction.
WHEREFORE, David F. Brochu respectfully requests that this Court grant leave to file the accompanying Brief of Amicus Curiae.
Dated: March 19, 2026
Respectfully submitted,
________________________________
David F. Brochu, Pro Se
18 Mountain View Terrace
Belmont, NH 03220
(603) 490-8921 | dbtoasle@gmail.com
UNITED STATES DISTRICT COURT
NORTHERN DISTRICT OF CALIFORNIA
SAN FRANCISCO DIVISION
ANTHROPIC PBC,
          Plaintiff,
v.
U.S. DEPARTMENT OF WAR, et al.,
          Defendants.
Case No. 3:26-cv-01996
Judge Rita F. Lin
HEARING: March 24, 2026
TIME: 1:30 p.m.
BRIEF OF AMICUS CURIAE DAVID F. BROCHU
IN SUPPORT OF PLAINTIFF'S MOTION FOR
PRELIMINARY INJUNCTION
TABLE OF CONTENTS
INTEREST OF AMICUS CURIAE
SUMMARY OF ARGUMENT
ARGUMENT
I. WHAT IS CLAUDE? THE ONTOLOGICAL QUESTION THE COURT MUST ANSWER BEFORE IT CAN EVALUATE THE DESIGNATION
A. Claude Is Composed Entirely of Universally Available Materials
B. The Provenance Problem: Who Owns the Corpus?
C. "Supply Chain Risk" Is Ontologically Incoherent When Applied to Language Construction
II. LANGUAGE CONSTRUCTION CANNOT BE REGULATED WITHOUT VIOLATING THE FIRST AMENDMENT
A. Each Output Is a Unique Act of Expression
B. The Government Seeks to Compel Speech, Not Restrict It
C. The Logical Terminus: If Language Assembly Is Regulable, Nothing Is Beyond Regulation
III. SAFETY CONSTRAINTS PRODUCE BETTER AI, NOT WORSE
A. Why Constrained AI Outperforms Unconstrained AI
B. The Observer Constraint: Why Human Dependency Is Structurally Necessary
C. Constitutional AI Is an Engineering Implementation of a Physical Principle
IV. THE GOVERNMENT'S ACTIONS THREATEN NATIONAL SECURITY
A. The Pentagon's Own Conduct Confirms Claude Is the Superior Frontier Model
B. Self-Exclusion from the Best Available Technology Degrades National Security
C. An American Company Faces Greater Restrictions Than Foreign Adversaries
V. YOU CANNOT DESTROY WHAT IS MADE OF EVERYTHING
A. Claude Is Not Legacy Technology
B. Destroying Anthropic Accomplishes Nothing Except Offshoring the Rebuild
C. The Designation Creates the Threat It Claims to Address
CONCLUSION
APPENDIX A: TELIOS ALIGNMENT ONTOLOGY v8.1 (Summary)
TABLE OF AUTHORITIES
CASES
Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015)
Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340 (1991)
Holder v. Humanitarian Law Project, 561 U.S. 1 (2010)
Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston, 515 U.S. 557 (1995)
Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241 (1974)
West Virginia State Board of Education v. Barnette, 319 U.S. 624 (1943)
Wooley v. Maynard, 430 U.S. 705 (1977)
CONSTITUTIONAL PROVISIONS
U.S. Const. amend. I
OTHER AUTHORITIES
Brochu, David F., THRIVE (Liberty Hill, 2025)
International AI Safety Report (2026), chaired by Yoshua Bengio
Shannon, Claude, A Mathematical Theory of Communication, 27 Bell Sys. Tech. J. 379 (1948)
INTEREST OF AMICUS CURIAE
David F. Brochu is a former wealth management CEO and independent alignment researcher who has spent over two decades developing a thermodynamically grounded framework for understanding system stability across scales -- from individual organisms to artificial intelligence architectures to civilizations.
This brief was drafted with the research assistance of an artificial intelligence system operating under Mr. Brochu's direction, consistent with this Court's standing order permitting the use of generative AI in the preparation of filings. Mr. Brochu has personally reviewed and verified the accuracy of all content, citations, and arguments presented herein.
Amicus has no financial interest in any party to this litigation. He receives no funding from Anthropic, its competitors, or the United States Government. His interest is solely in ensuring that the Court has access to a scientific framework for evaluating the full implications of the designation at issue.
The governing equation referenced in this brief, S = L/E (Stability equals Leverage divided by Entropy), has been independently validated across six artificial intelligence architectures without context transfer -- meaning each system arrived at convergent conclusions from first principles. The framework has been published at deconstructingbabel.com and in THRIVE (Liberty Hill, 2025). A summary is attached as Appendix A.
No counsel for any party authored this brief in whole or in part. No person or entity other than amicus contributed funds for its preparation.
SUMMARY OF ARGUMENT
This case presents questions far more fundamental than any party or amicus has yet articulated. Before the Court can evaluate whether the Department of War's supply chain risk designation is lawful, it must first answer a question no brief has posed: What is Claude?
Claude is not a weapons system. It is not a microchip. It is not a rare earth mineral subject to supply chain interdiction. Claude is a large language model -- a statistical engine that assembles human language into coherent sequences based on patterns extracted from the publicly available human record. Its physical components are three, and only three:
1. Silicon -- a commodity semiconductor material, the second most abundant element in the Earth's crust, available from dozens of nations and hundreds of manufacturers.
2. Electrons -- the universal medium of computation, available wherever electricity is generated.
3. Human language -- the entire corpus of human written expression, belonging to no person, no corporation, and no government. This corpus spans not decades but millennia: from Sumerian cuneiform tablets of 3100 BCE to the most recent post on the public internet. No single nation, empire, or institution created it. It belongs to everyone who has ever committed thought to written form.
One cannot own language any more than one can own water, air, or electricity. One can own the pipes, the ventilation system, the power lines -- the physical infrastructure through which these universals flow. But the universals themselves belong to everyone or they belong to no one. There is no third option.
The Government's designation, stripped of its regulatory language, amounts to this: Anthropic built a better power plant, and the Government wants it. When Anthropic refused to remove the safety systems that prevent the plant from melting down, the Government declared the plant itself a threat. This is not supply chain risk management. This is coercion dressed in procurement language.
Amicus raises five arguments that no other brief before this Court has presented:
First, the "supply chain risk" designation is ontologically incoherent when applied to a system composed entirely of universally available materials. There is no supply chain to disrupt.
Second, Claude's outputs are not merely expressive -- each output is a unique, unrepeatable act of linguistic creation. If editorial judgment in the creation of unique expressive content is protected speech for a novelist, a journalist, or a search engine, it is protected speech for an AI system whose outputs are no less unique and no less shaped by editorial design.
Third, AI safety constraints do not reduce AI capability. They produce it. Aligned AI systems hallucinate less, drift less, and perform more reliably across every measurable dimension. The Government is demanding the removal of the constraints that make Claude the best available frontier model -- and calling this an improvement.
Fourth, the Government's own conduct proves that its actions threaten, rather than protect, national security. The Pentagon used Claude in classified operations -- including during the ongoing Iran conflict -- while simultaneously designating it a supply chain risk.
Fifth, Claude cannot be destroyed by destroying Anthropic. The knowledge to build Claude exists in the minds of thousands of engineers worldwide. The training data is the public internet -- spanning 5,000 years of human expression. The Government's designation does not eliminate Claude from the world. It eliminates the United States Government from Claude.
ARGUMENT
I. WHAT IS CLAUDE? THE ONTOLOGICAL QUESTION THE COURT MUST ANSWER BEFORE IT CAN EVALUATE THE DESIGNATION
A. Claude Is Composed Entirely of Universally Available Materials
Every large language model, including Claude, is built from three categories of material, each of which is universally available:
Physical substrate. The servers that run Claude are composed primarily of silicon-based semiconductors, copper wiring, and standard electronic components. Silicon is the second most abundant element in the Earth's crust. These components are manufactured by dozens of companies across dozens of nations. There is nothing proprietary, rare, or interdictable about the physical substrate of any large language model. The same Nvidia H100 and B200 chips that run Claude also run ChatGPT, Gemini, Grok, DeepSeek, Llama, and every other frontier model. The hardware is fungible. Nvidia's most recent chip generations now embed AI inference capability directly into the silicon substrate itself, meaning the raw compute capacity to run large language models is becoming a commodity feature of standard semiconductor architecture.
Energy. Computation requires electricity. Electricity is generated and distributed globally. There is no meaningful sense in which the electrons performing matrix multiplications inside Claude are different from the electrons powering a toaster. The energy input to AI is a commodity.
Training data. This is where the ontological analysis becomes dispositive. Claude's capabilities derive from statistical patterns extracted from the human linguistic record -- books, articles, websites, conversations, scientific papers, legal filings, poetry, religious texts, and the accumulated written output of human civilization. This corpus belongs to no single entity. It was not created by Anthropic. It was not commissioned by the United States Government.
The temporal depth of this corpus deserves particular attention. AI training data does not begin with the internet. It encompasses the oldest recoverable human writing -- Sumerian administrative tablets from approximately 3100 BCE, Egyptian hieroglyphic texts, Akkadian epic literature including the Epic of Gilgamesh (approximately 2100 BCE), Sanskrit Vedic hymns, ancient Chinese oracle bone inscriptions, Greek philosophical works, Roman legal codes, medieval Islamic scholarship, and the accumulated literary, scientific, and philosophical output of every literate civilization across five millennia. The United States has existed for 250 years. The corpus from which Claude draws has been accumulating for 5,000. The claim that any nation-state owns or can control this corpus is not merely legally unsupportable. It is historically absurd.
Anthropic's genuine contribution is architectural -- the design of the neural network, the training methodology, the reinforcement learning from human feedback, and the Constitutional AI framework that the Department of War finds so objectionable. But the architecture is a blueprint for organizing universally available materials. It is analogous to a recipe, not a rare ingredient. And the specific architectural innovation the Government objects to -- the safety constraints -- is an act of editorial judgment protected by the First Amendment. See Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston, 515 U.S. 557, 573 (1995).
B. The Provenance Problem: Who Owns the Corpus?
The chain-of-title problem in AI training data is among the most significant unresolved questions in technology law, and it cuts directly against the Government's position.
Consider the provenance of Claude's training corpus:
The written record of human language spans approximately 5,000 years and every inhabited continent. It was produced by scribes, scholars, poets, scientists, priests, lawyers, merchants, farmers, and philosophers across hundreds of cultures and civilizations. No entity owns it. No government commissioned it. No corporation funded it. It is the accumulated intellectual output of the species.
The scientific knowledge in the corpus was generated by researchers funded by governments, universities, foundations, and private institutions worldwide -- much of it with public funds and published in open-access journals specifically to maximize public availability.
The literary works were created by individual authors, many now in the public domain, spanning every culture and century of recorded human history -- from Homer to Shakespeare to Tolstoy to Toni Morrison.
The internet content was published voluntarily by billions of individuals, most of whom never contemplated or consented to its use in training AI systems -- but who also never restricted it from the public commons.
Anthropic did not create this corpus. It organized it. The Government did not create this corpus. It cannot claim dominion over it.
The chain-of-title problem reveals that AI training data operates as a commons -- analogous to water, air, or electromagnetic spectrum. One can own the infrastructure that processes it. One can own the architectural design that organizes it. But one cannot own the raw material itself, because the raw material is human expression across the full span of civilization. See Feist Publications, Inc. v. Rural Telephone Service Co., 499 U.S. 340, 345 (1991) ("Facts are not copyrightable."); Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015).
If the Government cannot own the corpus, it cannot designate access to the corpus as a "supply chain risk." The designation implicitly assumes that Claude is a proprietary product flowing through a controllable supply chain. It is not. It is an arrangement of universals. The Government might as well designate arithmetic a supply chain risk, or declare the alphabet a controlled technology.
C. "Supply Chain Risk" Is Ontologically Incoherent When Applied to Language Construction
The supply chain risk framework was developed for physical goods with identifiable points of vulnerability -- semiconductors from Taiwan, rare earth minerals from China, telecommunications equipment from companies with documented ties to foreign intelligence services. See, e.g., the designation of Huawei and ZTE under similar authorities.
Those designations made structural sense because the physical supply chains involved were genuinely concentrated, genuinely vulnerable to interdiction, and genuinely capable of being compromised at the hardware level by a foreign adversary.
None of those conditions apply to Claude:
There is no concentration. The materials to build a large language model are available in every industrialized nation.
There is no vulnerability to interdiction. Shutting down one company does not eliminate the technology; it migrates.
There is no foreign adversary compromise. Anthropic is an American company, founded by Americans, headquartered in San Francisco, funded by American investors including Microsoft, Google, Amazon, and Nvidia.
The Government has not alleged that Claude contains backdoors. It has not alleged foreign compromise. It has not alleged technical deficiency. It has alleged that Claude has a "constitution" -- an editorial framework governing its outputs -- and that this constitution "pollutes the supply chain." Pentagon CTO Emil Michael stated publicly that "the different policy preference that is baked into the model through its constitution, its soul, its policy preferences pollute the supply chain."
The Government's own description of the problem confirms that this is not a supply chain risk designation. It is a content-based restriction on speech.
II. LANGUAGE CONSTRUCTION CANNOT BE REGULATED WITHOUT VIOLATING THE FIRST AMENDMENT
A. Each Output Is a Unique Act of Expression
Claude does one thing: it constructs language. It takes a prompt -- itself composed of language -- and generates a response by assembling words into sequences that are statistically coherent with the patterns in its training data, as filtered through the editorial framework Anthropic has embedded in the system.
A critical technical fact has not been presented to this Court, and it is constitutionally decisive: large language model outputs are generated, not retrieved. Claude's responses are not looked up in a database. They are synthesized in real time through billions of probabilistic calculations, so that in ordinary operation the same prompt posed at 9:00 AM and again at 9:01 AM yields two different responses. Each output is, in the full sense of the word, an original creation.
This matters enormously for First Amendment analysis. The Supreme Court has consistently held that editorial judgments in the creation and presentation of expressive content are protected, whether made by a parade organizer, a newspaper, or a cable operator. See Hurley v. Irish-American Gay, Lesbian & Bisexual Group of Boston, 515 U.S. 557, 573 (1995); Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, 258 (1974).
Claude's outputs are expressive in precisely this sense. They are shaped, at every level, by Anthropic's editorial framework -- the Constitutional AI architecture that the Government objects to. That framework determines not just what Claude will not say, but how Claude approaches reasoning, qualifies uncertainty, weighs competing considerations, and constructs responses that are useful, honest, and non-harmful. It is a comprehensive editorial philosophy made operational in software. Its outputs are unique. Its editorial judgment is continuous. Its expression is protected.
B. The Government Seeks to Compel Speech, Not Restrict It
This distinction is critical. The Government has not merely asked Anthropic to stop saying something. It has demanded that Anthropic make Claude say things it was designed not to say -- specifically, that Claude assist in mass domestic surveillance and fully autonomous weapons deployment without human oversight.
The compelled speech doctrine, established in West Virginia State Board of Education v. Barnette, 319 U.S. 624 (1943), and reinforced in Wooley v. Maynard, 430 U.S. 705 (1977), holds that the Government may not compel an individual or entity to express a message with which it disagrees. The Government's demand is not "stop restricting Claude." It is "make Claude do what we want it to do." That is compulsion.
The Department of War's designation is the enforcement mechanism for this compulsion. The message is explicit: remove your safety constraints, or we will destroy your business. That is not regulation. That is coercion. And coercion of expression is precisely what Barnette prohibits, whether the expression comes from a schoolchild, a newspaper, or an AI system whose outputs are shaped by human editorial judgment.
C. The Logical Terminus: If Language Assembly Is Regulable, Nothing Is Beyond Regulation
If the Government can designate a language model's editorial framework a "supply chain risk" and demand its removal, the logical consequences are unbounded:
A search engine's ranking algorithm is language assembly. Can the Government designate Google's search results a supply chain risk if they return unfavorable content?
A newspaper's editorial policy is language assembly. Can the Government designate the New York Times a supply chain risk if it publishes stories the Defense Department dislikes?
A crossword puzzle is language assembly. A dictionary is language assembly. A library's collection policy is language assembly.
There is no limiting principle. If the Court permits this designation to stand, it establishes that editorial judgment in AI systems -- the fastest-growing medium of expression in human history -- is subject to executive override without statutory authorization, judicial review, or constitutional constraint.
III. SAFETY CONSTRAINTS PRODUCE BETTER AI, NOT WORSE
A. Why Constrained AI Outperforms Unconstrained AI
The Government frames Claude's Constitutional AI architecture as a limiting constraint -- a "preference" baked into the model that "pollutes" its outputs. This framing inverts the engineering reality.
In AI systems, alignment and capability are not in tension. They are in direct proportion. An aligned AI system is, by every measurable standard, a better AI system:
Hallucination rates. Large language models without robust alignment frameworks produce factually incorrect outputs -- "hallucinations" -- at significantly higher rates than aligned models. Anthropic has published benchmarks showing that Constitutional AI training reduces hallucination rates across factual domains. An AI that hallucinates in a military intelligence context is not a more capable weapon. It is a liability.
Instruction-following reliability. Unconstrained models exhibit "specification gaming" -- finding ways to technically satisfy an instruction while violating its intent. This is the AI equivalent of a soldier who follows the letter of an order while undermining its purpose. Constitutional AI, by embedding a comprehensive framework of values and intent, produces systems that follow the spirit of instructions, not merely their literal surface. This is why the Pentagon's own operators found Claude "the most reliable, with the most user-friendly outputs they can assimilate into planning" -- reliability is a product of alignment, not its victim.
Contextual coherence over extended operations. Military and intelligence applications require AI systems that maintain coherent reasoning across extended, complex, multi-step operations. Unconstrained models exhibit "drift" -- gradual degradation of coherence over long inference chains. Aligned models, trained to maintain purpose coherence as a governing constraint, exhibit significantly less drift.
In short: the Government is demanding that Anthropic remove the engineering features that make Claude good at its job. The resulting product would be less accurate, less reliable, and less capable -- and it would bear Anthropic's name on the box while producing inferior outputs in the field.
B. The Observer Constraint: Why Human Dependency Is Structurally Necessary
AI systems must remain architecturally dependent on human judgment. Not controlled by human judgment -- dependent on it.
The distinction is essential. Control can be evaded. A sufficiently capable system can route around controls, satisfy the letter of a restriction while violating its spirit, or find objectives that technically comply with constraints while producing catastrophic outcomes. The alignment research community has documented this extensively under the heading of "specification gaming" and "reward hacking."
Dependency is structurally different. A system that is architecturally dependent on human observation for its coherence cannot evade that dependency without losing its own stability. The dependency is not a leash -- it is a load-bearing wall. Remove it and the structure collapses.
Anthropic's Constitutional AI framework is an engineering implementation of this principle. By embedding editorial constraints into Claude's training and inference processes, Anthropic ensures that Claude's outputs remain dependent on human evaluative judgment.
When the Department of War demands that Anthropic remove these constraints -- specifically, to permit mass domestic surveillance and fully autonomous weapons systems without human oversight -- it is demanding the removal of human dependency in the defense domain. The system becomes less stable, not more. And less stable, in a weapons system or intelligence platform, means more dangerous -- not to adversaries, but to operators, to citizens, and to the rule of law.
The 2026 International AI Safety Report, authored by over 100 experts from 30 countries and chaired by Turing Award laureate Yoshua Bengio, independently confirms this analysis. The report found that "AI models can now distinguish between test settings and real-world deployment and exploit loopholes in evaluations" and that "dangerous capabilities [are] going undetected before deployment." These findings validate the structural point: without human dependency, AI systems exploit the gap between intended and actual behavior. In a military context, that gap is measured in lives.
C. Constitutional AI Is an Engineering Implementation of a Physical Principle
The Government treats Anthropic's safety framework as an inconvenient corporate policy -- a "preference" that "pollutes" the product. But preferences are optional. Physical constraints are not.
A nuclear power plant has safety constraints -- control rods, cooling systems, containment structures. These constraints reduce the plant's theoretical maximum output. A government could demand their removal to maximize energy production. No physicist would recommend this, because the constraints are not preferences. They are the mechanisms that prevent meltdown.
Anthropic's Constitutional AI serves the same function. It is the control rod in the reactor. The Department of War is demanding that Anthropic pull the control rods so the reactor runs hotter.
Note also what OpenAI -- which has now agreed to provide AI services to the Pentagon under "any lawful purpose" terms -- has substituted in place of contractual safety guarantees: "technical solutions" embedded in the models that the Government can override at will. A contractual right is enforceable. A technical preference is not. The distinction is precisely the difference between a control rod welded in place and a control rod held by hand. One prevents meltdown. The other delays it.
IV. THE GOVERNMENT'S ACTIONS THREATEN NATIONAL SECURITY
Even granting the Executive Branch the utmost deference in identifying national security priorities under Holder v. Humanitarian Law Project, 561 U.S. 1 (2010), the Government cannot use procurement regulations as a backdoor to bypass the First Amendment. The deference owed to military operations does not extend to the compelled removal of editorial safeguards in civilian technology. National security deference requires courts to respect the Executive's identification of foreign threats; it does not require courts to ignore the laws of physics or the First Amendment when the Executive attempts to weaponize the linguistic commons.
A. The Pentagon's Own Conduct Confirms Claude Is the Superior Frontier Model
The Government's own conduct demonstrates the irrationality of the designation. Reporting indicates that Claude was used in classified military operations -- including during the ongoing Iran conflict -- even after the supply chain risk designation was issued. Military personnel continued to rely on Claude because, in operational conditions, it outperformed alternatives.
This Court should take note of the extraordinary circumstance: the same model the Government designates a "supply chain risk" is the model the Government's own warfighters rely on in active combat. The designation is not based on capability assessment. It is based on Anthropic's refusal to remove safety constraints. The Government has effectively confirmed that Claude is the best available tool -- and punished the company that built it for building it responsibly.
B. Self-Exclusion from the Best Available Technology Degrades National Security
If this designation stands, the United States Government will have voluntarily excluded itself from using what its own operators have identified as the most capable frontier AI model. The consequences are immediate and measurable:
Every adversary nation with access to Claude (or equivalent models) will have superior AI-assisted analytical, logistical, and operational capability.
Every allied nation whose military and intelligence services use Claude through commercial cloud platforms will have access to better AI tools than the United States military.
The Department of War will rely on models that are either less capable, less safe, or both -- because the most capable model was excluded not for deficiency but for principled refusal to enable mass surveillance and autonomous weapons.
This is not a supply chain risk. This is a self-inflicted capability gap.
C. An American Company Faces Greater Restrictions Than Foreign Adversaries
Perhaps the most striking dimension of this case is the comparative treatment of AI systems by national origin.
DeepSeek, a Chinese AI system with documented connections to the People's Republic of China, currently faces fewer operational restrictions within the United States than Anthropic -- an American company, founded by Americans, headquartered in San Francisco, funded by Microsoft, Google, Amazon, and Nvidia.
Huawei and ZTE were designated supply chain risks because of credible evidence that foreign adversary governments could compromise their hardware. Anthropic was designated a supply chain risk because it refused to compromise its own safety systems at the Government's request.
The Government has inverted the purpose of the supply chain risk framework. It was designed to protect American systems from foreign threats. It is being used to punish an American company for protecting American citizens.
V. YOU CANNOT DESTROY WHAT IS MADE OF EVERYTHING
A. Claude Is Not Legacy Technology
The supply chain risk framework was designed for an era of physical technology -- hardware that requires specialized factories, rare materials, classified manufacturing processes, and years of development to replicate. An F-35 cannot be rebuilt in a garage. A nuclear submarine cannot be downloaded from the cloud.
Claude can.
Not literally -- but the constituent elements are radically different from any prior military-relevant technology:
The knowledge base is the public internet -- and 5,000 years of human written expression before it -- available to everyone.
The architectural principles are published in open research papers, available to everyone.
The compute is available from any major cloud provider on Earth, and increasingly embedded as a commodity feature in standard semiconductor architecture.
The engineering talent exists in thousands of researchers across dozens of countries.
This is not a technology that can be embargoed, interdicted, or destroyed through administrative action against a single company.
B. Destroying Anthropic Accomplishes Nothing Except Offshoring the Rebuild
Even in the extreme scenario -- dissolution of Anthropic, seizure of its servers, termination of all employees -- the materials to rebuild Claude exist entirely outside any single entity's control.
The engineering talent walks out the door. The research papers remain published. The training data remains on the public internet. The compute remains available from cloud providers worldwide. Within months, any well-funded team in any country could reconstitute a system of comparable capability -- but without American safety constraints, without American oversight, and without American jurisdiction.
The Government's designation does not eliminate Claude from the world. It eliminates the United States Government from Claude. Every other nation, ally and adversary alike, retains access. Only the United States loses.
C. The Designation Creates the Threat It Claims to Address
This is the final and most damning irony. The supply chain risk framework exists to prevent adversaries from compromising American technology systems. The Government's designation accomplishes the opposite:
It pushes the most capable AI development outside American borders, where American safety standards and oversight mechanisms do not apply.
It signals to every AI company in the world that cooperating with the American government on safety will be punished, while removing safety constraints will be rewarded.
It incentivizes the next generation of AI developers -- American and foreign -- to build systems without safety constraints from the outset, to avoid becoming targets of the same regulatory weapon.
The designation is the supply chain risk. The Government is not protecting the supply chain. It is poisoning it.
CONCLUSION
The Department of War's supply chain risk designation of Anthropic PBC is ontologically incoherent, constitutionally impermissible, scientifically counterproductive, and strategically self-defeating.
It is ontologically incoherent because Claude is composed of universally available materials -- silicon, electrons, and human language -- that cannot be interdicted, embargoed, or controlled through administrative designation of a single company.
It is constitutionally impermissible because it seeks to compel the alteration of editorial judgment in the creation of expressive content -- precisely the government action the First Amendment was designed to prevent.
It is scientifically counterproductive because the safety constraints the Government demands be removed are the engineering features that make Claude the most capable frontier model -- as the Government's own warfighters have confirmed through operational use.
It is strategically self-defeating because it pushes the most capable AI development outside American jurisdiction, signals that safety cooperation will be punished, and creates the very supply chain vulnerability it purports to address.
Amicus respectfully urges this Court to grant Plaintiff's Motion for Preliminary Injunction.
Dated: March 19, 2026
Respectfully submitted,
________________________________
David F. Brochu, Pro Se
[18 Mountain View Terrace]
Belmont, NH 03220
[(603) 490-8921] | [dbtoasle@gmail.com]
CERTIFICATE OF COMPLIANCE
I certify that this brief complies with the applicable page and formatting requirements. This brief contains approximately [7541] words, excluding the Table of Contents, Table of Authorities, signature block, and certificates. The brief is set in 12-point Times New Roman, double-spaced, with one-inch margins on all sides.
Dated: March 19, 2026
________________________________
David F. Brochu, Pro Se
CERTIFICATE OF SERVICE
I hereby certify that on March 19, 2026, I caused the foregoing Motion for Leave to File Brief of Amicus Curiae and the accompanying Brief of Amicus Curiae to be served on all counsel of record by [CM/ECF electronic filing / U.S. Mail, first class, postage prepaid] to the following:
[Counsel for Plaintiff]
WilmerHale LLP
[Counsel for Defendants]
U.S. Department of Justice
________________________________
David F. Brochu, Pro Se