The Pentagon declared Anthropic a national security threat because the company drew a line. Here's what that means for every founder building on US AI infrastructure.
What Happened
The Pentagon — internally referred to as the "Department of War" under Secretary Pete Hegseth — designated Anthropic as a "Supply Chain Risk to National Security" under 10 U.S.C. § 3252. That statute was created to keep Chinese telecom vendors with potential backdoors out of military systems. It had never been used against an American company before.
The trigger: during contract negotiations, Anthropic refused to strip two usage policies from its Claude model. Specifically, the company declined to make Claude available for mass domestic surveillance of US citizens without judicial oversight and for lethal autonomous weapons systems without human authorization. Hegseth demanded "all lawful uses" with zero contractor-imposed restrictions. Anthropic said no.
The response was draconian. Alongside the designation, Trump ordered all federal agencies to stop using Claude. The Pentagon itself was given six months to wind down, a curiously generous grace period for a supposedly genuine security threat. At stake: a contract worth up to $200 million and Anthropic's only clearance for classified military systems, operated through Palantir and AWS FedRAMP. On March 9, Anthropic filed two lawsuits at once: one in California challenging Trump's executive order, the other in the D.C. federal appeals court contesting the supply-chain designation itself.
Sound interesting?
The Industry Fallout
What followed was remarkable. More than 30 leading AI researchers from OpenAI and Google DeepMind filed an amicus brief with the court in their personal capacity — among them Jeff Dean, Chief Scientist at Google DeepMind and one of the most influential AI scientists alive. Their message was unambiguous: "The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry." And further: "National security is not served by reckless designations of the military's American technology partners as a 'supply chain risk' or the suppression of public discourse on AI safety."
OpenAI CEO Sam Altman, meanwhile, took a different route: his company signed its own Pentagon deal the same day — and drew immediate backlash. Altman later conceded publicly: "We shouldn't have rushed to get this out on Friday. We were genuinely trying to de-escalate things, but I think it just looked opportunistic and sloppy."
The internal consequences were severe. Caitlin Kalinowski, Head of Hardware & Robotics at OpenAI, resigned. Her reasoning: "AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
Consumer sentiment shifted too: Claude surged to number one on the US iPhone App Store for the first time, according to Sensor Tower, while ChatGPT's 1-star reviews spiked by 775 percent on Saturday, February 28, 2026.
Why This Matters More Than You Think
Two precedents help put this in context — and both show why the Anthropic case dwarfs them.
Apple vs. FBI, 2016: Apple refused to build an iOS backdoor to unlock the San Bernardino shooter's iPhone. The FBI obtained a court order, then ultimately found its own workaround. Apple held its ethical line — and paid no real price for it.
Google's Project Maven, 2018: Internal protests and mass resignations at Google led the company to drop a DoD contract for AI-powered drone targeting. The pressure came from within.
The 2026 Anthropic case is structurally different — and far more severe. Here, a government reached not for court orders, but for an instrument of the national security apparatus designed for hostile states, and turned it against a domestic company. Not because of security vulnerabilities, but because of a policy disagreement. Legal scholars call it a blatant violation of the First Amendment and Due Process. Michael Pastor of New York Law School put it bluntly: "I've never seen a case like this. It would never have struck our minds that, when we were having difficulty in a negotiation, we would threaten the company essentially with destruction."
The pattern is clear: the Trump administration is weaponizing government contracts — against law firms, universities, and now AI labs. The precedent is set. Any company providing AI infrastructure to government agencies that draws ethical boundaries can be targeted with this instrument.
What This Means for European Founders
Here's where it gets concrete. If your startup uses Claude, GPT-4, Gemini, or any other proprietary US model in your product — whether via API, AWS Bedrock, or Azure OpenAI — you have a blind spot in your risk assessment.
The Anthropic case demonstrates that a foreign government ministry can restrict or terminate access to your core infrastructure by decree. The decision is made in Washington, not in Berlin, Vienna, or Zurich. No EU AI Act provision, no GDPR clause protects you from that. This is no longer a theoretical scenario — it's an active court case.
The risk isn't symmetrical. For a US startup running on AWS, it's uncomfortable. For a German healthtech or an Austrian fintech routing sensitive patient data or financial transactions through a US model, it's an existential dependency problem. Not just because of data privacy — because of infrastructure sovereignty: who ultimately controls the model that does the thinking inside your product?
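One practical mitigation is to treat the model provider as a replaceable dependency rather than a hard-wired one. The sketch below illustrates the idea: route every completion call through a thin abstraction that can fail over from a US-hosted API to a self-hosted open model. It is a minimal sketch, not a prescribed implementation; the URLs, model names, and environment variables are placeholders, and the fallback assumes an OpenAI-compatible endpoint of the kind commonly exposed by self-hosted inference servers such as vLLM or Ollama.

```python
import os
import httpx

# Ordered list of backends: a US-hosted proprietary API first, then a
# self-hosted open model as fallback. All URLs, model names, and env
# variable names below are hypothetical placeholders.
BACKENDS = [
    {"name": "us-provider",
     "url": "https://api.example-us-provider.com/v1/chat/completions",
     "model": "proprietary-model", "key_env": "US_PROVIDER_API_KEY"},
    {"name": "eu-onprem",
     "url": "http://llm.internal.example.eu/v1/chat/completions",
     "model": "open-weights-model", "key_env": "ONPREM_API_KEY"},
]

def call_model(prompt: str, timeout: float = 30.0) -> str:
    """Try each backend in order; fail over if one is unreachable or revoked."""
    last_error = None
    for backend in BACKENDS:
        try:
            resp = httpx.post(
                backend["url"],
                headers={"Authorization":
                         f"Bearer {os.environ.get(backend['key_env'], '')}"},
                json={"model": backend["model"],
                      "messages": [{"role": "user", "content": prompt}]},
                timeout=timeout,
            )
            resp.raise_for_status()
            return resp.json()["choices"][0]["message"]["content"]
        except httpx.HTTPError as err:
            last_error = err  # backend down, key revoked, or access terminated
    raise RuntimeError(f"All model backends failed: {last_error}")
```

The code itself is trivial. The hard part, and the actual sovereignty question, is that the fallback branch only works if an open model is already deployed on infrastructure you control.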
The EU AI Act got the structural diagnosis right: high-risk categories demand transparency, traceability, and control. But it doesn't protect against political arbitrariness from a third country. That requires a different instrument.
This Is Exactly Why nopex Exists
We didn't build nopex because "AI infrastructure sovereignty" makes for a good pitch deck. We built it because dependence on US providers is a structural risk that was always going to materialize. Now we're watching it happen in real time.
nopex enables companies and public institutions in Europe to run AI infrastructure on open models — on-premises or in European data centers, with zero US cloud dependency. No model that can be switched off from Washington. No terms of service that shift with the political climate. No usage data flowing across the Atlantic. GDPR-compliant by design, not as a compliance checkbox. And no foreign ministry that can label your stack a "supply chain risk" — regardless of what happens politically over the next four years.
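For a sense of what this looks like in practice: self-hosted stacks typically expose an OpenAI-compatible endpoint, so moving from a US cloud API to an on-premises open model can be little more than a configuration change on the client side. The snippet below assumes such a generic endpoint; it is not nopex's documented interface, and the URL and model name are placeholders.

```python
from openai import OpenAI

# A standard OpenAI-compatible client pointed at a self-hosted endpoint
# inside your own network. Hypothetical URL and model name; assumes a
# generic OpenAI-compatible server (e.g. vLLM), not a documented nopex API.
client = OpenAI(
    base_url="http://llm.internal.example.eu/v1",  # requests never leave your network
    api_key="not-needed-for-local-deployments",    # many local servers ignore the key
)

response = client.chat.completions.create(
    model="open-weights-model",
    messages=[{"role": "user", "content": "Summarize this contract clause ..."}],
)
print(response.choices[0].message.content)
```

Same client code, same request schema; the only thing that changes is who controls the machine answering it.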
If you're deploying AI in critical processes today, you need to ask yourself: whose infrastructure is this, really? nopex is the answer for everyone who takes that question seriously.
