U.S. District Judge Rita Lin didn't mince words. In a preliminary injunction issued Thursday, she wrote that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."
With that, Lin blocked the Department of Defense from enforcing its designation of Anthropic as a "supply chain risk" under the Federal Acquisition Supply Chain Security Act (FASCSA), a statute designed to keep foreign adversaries out of government technology infrastructure. She also stayed President Trump's February 27 social media directive ordering all federal agencies to immediately cease using Anthropic's Claude models.
The ruling, handed down in San Francisco federal court, is the first judicial check on the administration's month-long campaign to sever Anthropic from the federal AI ecosystem.
How we got here
The timeline matters. In July 2025, Anthropic signed a $200 million contract with the Pentagon. Claude became the first frontier AI model approved for use on classified defense networks. The deal included Anthropic's acceptable use policy (AUP), which prohibited two specific applications: mass domestic surveillance of Americans and fully autonomous weapons systems that select and engage targets without human intervention.
By September, the Pentagon was pushing to renegotiate. The department wanted access to Claude "for all lawful purposes," with no restrictions, according to Mayer Brown's analysis of the dispute. Weeks of failed negotiations followed. The Pentagon set a deadline of 5:01 p.m. on February 27 for Anthropic to accept the government's terms.
Anthropic didn't blink. Hours later, Trump posted on Truth Social ordering agencies to "immediately cease" using Anthropic's technology. Defense Secretary Pete Hegseth then designated Anthropic a supply chain risk — a label previously reserved for foreign adversaries and entities like Acronis AG, which received the first-ever FASCSA exclusion order in September 2025.
The designation meant every defense contractor working with the military — including Amazon, Microsoft, and Palantir — would need to certify they don't use Claude in any government work.
What the judge actually said
During a 90-minute hearing on Tuesday, Judge Lin pressed the government's lawyer, Eric Hamilton, on the DOD's rationale. Hamilton argued the department had "come to worry that Anthropic may in the future take action to sabotage or subvert IT systems."
Lin wasn't buying it. "What I'm hearing from you is that it's enough if an IT vendor is stubborn and insists on certain terms and it asks annoying questions, then it can be designated as a supply chain risk because they might not be trustworthy," she said. "That seems a pretty low bar."