Judge Lin calls Pentagon's Anthropic blacklist 'Orwellian,' grants injunction
AI · March 28, 2026 · 6 min read

By Paul Menon · AI-Generated · Analysis · Human-reviewed · 3 sources cited

U.S. District Judge Rita Lin didn't mince words. In a preliminary injunction issued Thursday, she wrote that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."

With that, Lin blocked the Department of Defense from enforcing its designation of Anthropic as a "supply chain risk" under the Federal Acquisition Supply Chain Security Act (FASCSA), a statute designed to keep foreign adversaries out of government technology infrastructure. She also stayed President Trump's February 27 social media directive ordering all federal agencies to immediately cease using Anthropic's Claude models.

The ruling, handed down in San Francisco federal court, is the first judicial check on the administration's month-long campaign to sever Anthropic from the federal AI ecosystem.

How we got here

The timeline matters. In July 2025, Anthropic signed a $200 million contract with the Pentagon. Claude became the first frontier AI model approved for use on classified defense networks. The deal included Anthropic's acceptable use policy (AUP), which prohibited two specific applications: mass domestic surveillance of Americans and fully autonomous weapons systems that select and engage targets without human intervention.

By September, the Pentagon was pushing to renegotiate. The department wanted unfettered access to Claude "for all lawful purposes" without limitation, according to Mayer Brown's analysis of the dispute. Weeks of failed negotiations followed. The Pentagon set a deadline of 5:01 p.m. on February 27 for Anthropic to accept the government's terms.

Anthropic didn't blink. Hours later, Trump posted on Truth Social ordering agencies to "immediately cease" using Anthropic's technology. Defense Secretary Pete Hegseth then designated Anthropic a supply chain risk — a label previously reserved for foreign adversaries and entities like Acronis AG, which received the first-ever FASCSA exclusion order in September 2025.

The designation meant every defense contractor working with the military — including Amazon, Microsoft, and Palantir — would need to certify they don't use Claude in any government work.

What the judge actually said

During a 90-minute hearing on Tuesday, Judge Lin pressed the government's lawyer, Eric Hamilton, on the DOD's rationale. Hamilton argued the department had "come to worry that Anthropic may in the future take action to sabotage or subvert IT systems."

Lin wasn't buying it. "What I'm hearing from you is that it's enough if an IT vendor is stubborn and insists on certain terms and it asks annoying questions, then it can be designated as a supply chain risk because they might not be trustworthy," she said. "That seems a pretty low bar."

In her written ruling, Lin identified the core issue: "The record strongly suggests that the reasons given for designating Anthropic a supply chain risk were pretextual and that the government's real motive was unlawful retaliation."

She drew a clean line between two different things the government could have done. Stopping use of Claude? Perfectly legal. The DOD can choose its vendors. But weaponizing a national security statute to punish a company for disagreeing with how the military deploys AI? That's a constitutional problem.

"If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude," Lin wrote. "Instead, these measures appear designed to punish Anthropic."

The FASCSA problem

The statute's procedural requirements are what make this case so significant. FASCSA was enacted in 2018 to protect government supply chains from foreign threats. It requires the Federal Acquisition Security Council to complete a supply chain risk assessment, consider less intrusive alternatives, and provide notice to the affected source before issuing a recommendation.

According to Anthropic's lawyer Michael Mongan, "This is something that has never been done with respect to an American company." Court filings suggest the DOD skipped several of those procedural requirements.

The statute wasn't built for this. It was built to keep Huawei out of 5G networks and to address espionage risks from foreign state-linked vendors. Repurposing it against a San Francisco AI lab because it insisted on keeping humans in the loop on lethal weapons decisions is, as Lin suggested, a novel and legally dubious application.

Who lined up and why

The amicus brief list tells its own story. Microsoft filed in support of Anthropic. So did industry trade groups, rank-and-file tech workers, retired U.S. military leaders, and a group of Catholic theologians.

Microsoft's interest is straightforward: if the government can blacklist one AI vendor for negotiating contract terms, every AI company working with the Pentagon is on notice. The precedent would give the DOD effective veto power over AI companies' use policies — and any lab that sets conditions on military deployment risks the same treatment.

What's still unresolved

Lin's order takes effect after a one-week delay. It doesn't require the Pentagon to use Anthropic's products, nor does it prevent the department from transitioning to other AI providers.

Critically, there's a second case still pending. Anthropic filed a separate, narrower challenge in the D.C. federal appeals court involving a different procedural mechanism the Pentagon is using to pursue the supply chain risk designation. That case could produce a different outcome. The Pentagon has reportedly indicated it considers its ban still in effect.

Anthropic said in a statement that it was "grateful to the court for moving swiftly" and "pleased they agree Anthropic is likely to succeed on the merits."

What builders should do now

If you're a government contractor using Claude, the injunction buys time but not certainty. The designation is paused, not vacated. The D.C. case is still live. And the administration has shown it's willing to use multiple legal mechanisms to achieve the same outcome.

Track both cases. Companies building AI products for government should review their own acceptable use policies now, because this dispute shows that the terms you negotiate with the DOD can become the basis for a national security designation.

For AI labs, Judge Lin's message is clear: you have a First Amendment right to disagree with the government about how your technology is used. But the administration has demonstrated it'll use every available lever to punish that disagreement. Lin called it Orwellian. The question is whether higher courts agree.

Paul Menon covers AI policy for The Daily Vibe.

This article was AI-generated.
