The Department of War filed a blistering legal rebuttal this week accusing Anthropic of posing an "unacceptable national security threat" by refusing to let the military use its Claude AI model without restrictions.
In a 40-page court filing submitted March 17, 2026, Pentagon attorneys argued that Anthropic could "attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations" if the company felt its ethical red lines were being crossed.
The legal battle erupted after Anthropic refused the Pentagon's request to include "all lawful use" language in its AI service contracts. The company's usage policy currently prohibits uses involving surveillance, compromising computer systems, and "designing weapons or other systems to cause harm or loss of human life."
Presidential Directive Bans Anthropic
On February 27, 2026, President Trump posted on social media that Anthropic was "putting AMERICAN LIVES at risk" by insisting the Department of War "obey" the company's terms of service. He directed all federal agencies to cease using Anthropic's technology within six months.
The same day, the Secretary of War designated Anthropic as a "supply chain risk" under federal procurement law, effectively barring the company from contracts involving national security systems.
Claude is currently the Pentagon's most widely deployed AI model and the only one embedded in classified systems, according to court documents.
"AI Systems Are Acutely Vulnerable"
The Pentagon's filing reveals deep concerns about Anthropic's technical access to military infrastructure. Unlike traditional software, AI models require "constant tuning" and are "acutely vulnerable to manipulation" by developers with privileged access, the Under Secretary of War for Research and Engineering wrote in an analysis attached to the filing.
The document describes a hypothetical scenario of "a critical defense system failing to engage due to an unapproved, vendor-side modification" during active combat operations.
Pentagon officials also cited an incident during recent military operations when "Anthropic leadership questioned the use of their technology in warfighting systems" despite the use being "clearly permitted" under existing contracts with defense contractor Palantir.
Anthropic Seeks Injunction
Anthropic filed suit seeking a preliminary injunction to block the presidential directive and force the government to continue using its products. The company argues the actions constitute retaliation for its public stance on AI safety.
In court filings, Anthropic characterized itself as "a leading voice on issues related to AI safety and policy" whose "founding commitments" include responsible AI development.
But Pentagon attorneys rejected that framing entirely. "Anthropic concedes the Government's right not to use Anthropic's services," the filing states, yet the company "nonetheless seeks a preliminary injunction to effectively bar the Government from doing just that."
The Bigger Stakes
The case represents a fundamental clash over who controls AI used by the military. The Pentagon argues that allowing a private company to restrict how the government uses AI tools would give that company "veto power over the operational decisions of the United States military."
Anthropic offers a counter-narrative: that imposing usage restrictions is a responsible way to prevent AI from being used for mass surveillance of Americans or for fully autonomous weapons that operate without human oversight.
The Pentagon filing insists it "has no interest in using AI to conduct mass surveillance of Americans (which is illegal)" and doesn't want autonomous weapons "that operate without human involvement."
But the department insists on the principle that "it will not let ANY company dictate the terms regarding how it makes operational decisions," according to internal communications cited in the filing.
A preliminary hearing is scheduled for March 24, 2026, before Judge Rita F. Lin in San Francisco.
Questions and Answers
What is the lawsuit about?
Anthropic is suing the Pentagon after being designated a "supply chain risk" and banned from federal contracts. The company refused the Pentagon's demand to allow "all lawful use" of its Claude AI model, citing ethical restrictions on surveillance and weapons development.
Why did the Pentagon ban Anthropic?
The Pentagon argues Anthropic poses a national security threat because the company could theoretically disable or alter its AI models during military operations if it disagreed with how they were being used. The Department of War says AI requires constant vendor access and tuning, making it vulnerable to manipulation.
What does Anthropic's usage policy restrict?
Anthropic's terms of service prohibit using Claude for surveillance, compromising computer systems or networks, and designing weapons or systems intended to cause harm or loss of human life. The company says these restrictions reflect its commitment to responsible AI development.
What happens next?
A federal judge will hear arguments on March 24, 2026, on whether to grant Anthropic's request for a preliminary injunction that would force the government to continue using its AI products while the case proceeds.