The U.S. government said on Tuesday that it had deemed the artificial intelligence company Anthropic an “unacceptable risk” to national security because the start-up could disable or alter its technology to suit its own interests, rather than the country’s priorities, in a time of war.
In a 40-page filing in U.S. District Court for the Northern District of California, lawyers for the government said they questioned whether Anthropic was a “trusted partner,” especially given that A.I. systems “are acutely vulnerable to manipulation.”
Giving Anthropic access to the Department of Defense’s warfighting infrastructure would therefore “introduce unacceptable risk into DoW supply chains,” the government said, referring to the Department of War, which is the Trump administration’s favored term.
Anthropic did not immediately comment.
The filing was the government’s first response to lawsuits from Anthropic, a leading A.I. company based in San Francisco that makes the Claude chatbot. On March 9, Anthropic filed two lawsuits — one in the same court and the other in the U.S. Court of Appeals for the District of Columbia Circuit — to challenge Defense Secretary Pete Hegseth’s decision last month to label it a “supply chain risk.”
Mr. Hegseth acted after the Pentagon battled with the company over a $200 million contract for the use of A.I. in classified systems. During negotiations for the contract, Anthropic had said it did not want its A.I. used for mass surveillance of Americans or with autonomous lethal weapons. The Pentagon countered that it was not up to a private company to tell it how to use the technology.
When the two sides could not agree, Mr. Hegseth declared Anthropic a supply chain risk, a move that effectively cut the company off from working with the U.S. government. The label had previously been used only to bar foreign companies that posed a national security risk.
In its lawsuits, Anthropic accused the Pentagon of using the label to punish it on ideological grounds. The company has asked the judge in the California court to block the government’s designation. More than 100 enterprise customers might stop working with Anthropic because of the risk designation, the company has said, potentially leading it to lose billions of dollars in revenue.
A hearing on Anthropic’s request for a preliminary injunction is set for next Tuesday.
Anthropic began providing its A.I. technology last year in a pilot program established by the Pentagon. Two defense officials said the military had continued using Anthropic to help analyze intelligence as the war in Iran entered its third week.
Other tech companies and legal rights groups have filed briefs supporting Anthropic in its lawsuits. On Monday, the American Civil Liberties Union and the Center for Democracy and Technology filed a brief arguing that Anthropic's criticism of the Pentagon over the use of its A.I. technology was speech protected by the First Amendment.
Microsoft also filed a friend-of-the-court brief, urging a federal court to temporarily block the Pentagon’s designation of Anthropic as a supply chain risk. Thirty-seven engineers and researchers from OpenAI and Google, including Jeff Dean, Google’s chief scientist, also filed a brief supporting Anthropic.