Anthropic Sues Pentagon Over 'Supply Chain Risk' Blacklisting
Anthropic, the San Francisco-based artificial intelligence company, filed two lawsuits against the Department of Defense on Monday, escalating a months-long feud with the Trump administration into a major legal confrontation over the limits of government power and the future of AI safety standards in military contracting.
The suits, filed in the US District Court for the Northern District of California and the US Court of Appeals for the DC Circuit, accuse the administration of unlawfully retaliating against Anthropic for refusing to permit its AI models to be used for mass domestic surveillance or fully autonomous lethal weapons. The company argues the government's actions violate constitutional protections, including its First Amendment rights.
"The federal government retaliated against a leading frontier AI developer for adhering to its protected viewpoint on a subject of great public significance — AI safety and the limitations of its own AI models — in violation of the Constitution," Anthropic said in filings cited by The Verge.
The Supply Chain Risk Designation
The Pentagon formally issued the supply chain risk designation last Thursday, the culmination of a dispute that had become unusually public even by the standards of Washington's turbulent technology policy landscape. It marks the first time this blacklisting mechanism has been used against a US company, a fact Anthropic has highlighted as evidence of the administration's punitive intent.
The designation carries serious commercial consequences. Under its terms, any company doing business with the federal government is required to sever all ties with Anthropic. The company says the designation could cost it hundreds of millions of dollars in private deals, with the Financial Times reporting that the administration is effectively seeking to destroy Anthropic's economic value.
Roots of the Dispute
The conflict stems from Anthropic's attempt to implement what it describes as safety guardrails on the military's use of its Claude AI models. The company drew firm lines against deployment of its technology for mass domestic surveillance operations and for the development or operation of fully autonomous lethal weapons systems, positions that brought it into direct conflict with Pentagon procurement officials.
The Department of Defense responded by informing its suppliers that they could no longer use Anthropic's AI tools, citing the supply chain risk classification. Anthropic had previously vowed to contest any formal designation, and Monday's legal filings follow through on that commitment.
Stakes for the AI Industry
The case is being watched closely across the technology sector as a test of how far the government can go in pressuring AI companies to abandon their own safety policies. Anthropic's argument rests on the premise that the administration is punishing it not for technical failings or genuine security concerns, but for holding a particular viewpoint on one of the most contested questions in the field: how much autonomy AI systems should be permitted in life-and-death decisions.
For Anthropic, which depends heavily on commercial contracts to fund its AI safety research, the financial threat is existential. Its legal challenge will now determine whether the government's novel use of supply chain risk powers against a domestic firm can withstand judicial scrutiny.