The U.S. Department of War has officially designated Anthropic PBC as a supply chain risk and banned federal agencies from using its Claude AI model, following the company's refusal to accept "any lawful use" contract terms.
The dispute began when the Secretary of War directed the Department to incorporate standard "any lawful use" language into all AI service contracts. Anthropic, whose Claude model was the most widely deployed AI in Department systems—including classified networks—refused to waive its Usage Policy restrictions.
Anthropic defended its position, stating that its Usage Policy prohibits uses that "pose unacceptable risks, including surveillance, compromising computer systems or networks, and designing weapons or other systems to cause harm or loss of human life." The company maintained these safeguards reflect its corporate judgment about enabling beneficial AI while mitigating potential harms.
The Department countered that it "has no interest in using AI to conduct mass surveillance of Americans (which is illegal)" and doesn't want autonomous weapons without human oversight. The real issue, officials argued, is principle: "We will not let ANY company dictate the terms regarding how we make operational decisions."
On February 27, President Trump issued a directive ordering all federal agencies to cease using Anthropic's technology within six months. That same day, the Secretary announced on social media that the Department would designate Anthropic a supply chain risk under 10 U.S.C. § 3252.
In a March 3 memo titled "Urgent Supply Chain Risk Analysis," the Under Secretary of War for Research and Engineering detailed the concerns. Unlike traditional software, AI models require constant tuning and updates. Anthropic's "privileged access" means it could "unilaterally alter system guardrails and model weights without DoW consent," potentially resulting in "a critical defense system failing to engage due to an unapproved, vendor-side modification."
The memo noted that during recent active military operations, Anthropic leadership questioned the use of their technology in warfighting systems, "which raised material doubts as to whether they would cause their software to stop working or cause some other disastrous action that would put our warfighters' lives in danger."
Anthropic has sued the Department, seeking a preliminary injunction to block the ban. The case raises fundamental questions about private companies' ability to set ethical boundaries on government use of their technology, versus the military's need for unrestricted access to tools it deploys in combat situations.
The Pentagon's position is clear: "The principle behind the 'all lawful uses' contract language is that the Department will not let ANY company dictate the terms regarding how [it] make[s] operational decisions." Anthropic, meanwhile, insists its safeguards are necessary to prevent misuse of powerful AI systems.
A preliminary injunction hearing is scheduled for March 24 before Judge Rita F. Lin in San Francisco federal court.
Questions and Answers
Why did the Pentagon ban Anthropic?
The Pentagon designated Anthropic as a supply chain risk after the company refused to accept "any lawful use" contract terms. Officials worried that Anthropic's ability to update or disable its AI models could pose operational risks during military operations, especially since Claude was embedded in classified Defense systems.
What does Anthropic's Usage Policy prohibit?
Anthropic's Usage Policy restricts uses involving surveillance, compromising computer systems, and designing weapons or systems intended to cause harm or loss of human life. The company views these restrictions as necessary safeguards aligned with its founding commitments to AI safety.
What is the legal basis for the Pentagon's action?
The Secretary of War invoked 10 U.S.C. § 3252, a statute that empowers the Department to designate supply chain risks and exclude companies from procurements involving national security systems. The law was created to address risks from vendors who could sabotage, introduce malicious code, or subvert critical systems.
How did Anthropic respond to the ban?
Anthropic filed a lawsuit seeking a preliminary injunction to block implementation of both the Presidential Directive and the Secretarial Determination. The company argues the actions constitute retaliation for its public statements about AI safety and violate its First Amendment rights, Administrative Procedure Act protections, and due process guarantees.