Anthropic said that it had worked with Hegseth on revising contract language to meet the military's needs. The company said it was nearing a successful negotiation to continue working with the department, one that would include limitations on surveillance and weaponry, when those talks were abruptly undercut.

Instead, the Department of Defense “met Anthropic’s attempts at compromise with public castigation”.

As Anthropic was negotiating with defence officials, Trump derided the company as being run by “left wing nut jobs” and directed all government agencies to stop using Anthropic tools.

Hegseth quickly followed up on Trump’s announcement by labeling Anthropic a “supply chain risk”, meaning tools like Claude were suddenly considered not secure enough for government use. He also prohibited any company doing work with the government from using Anthropic tools.

Claude is one of the most popular AI tools in the world, with Claude Code being an almost ubiquitous part of work done by some of the biggest technology firms in the US, including Google, Meta, Amazon and Microsoft.

Those companies also do work with the government. Last week, Microsoft, Google and Amazon said they would continue to use Claude outside of any work for defence agencies.

Nevertheless, Anthropic claims that it has been “irreparably” harmed as a result of Trump and Hegseth’s comments.

“Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term”, the company said. “On top of those immediate economic harms, Anthropic’s reputation and core First Amendment freedoms are under attack.”

Anthropic also noted the “chilling effect” that the Trump administration’s retaliation is having on the free speech of other entities.

But by Monday afternoon, nearly 40 Google and OpenAI employees had filed a brief with the court supporting Anthropic and its efforts to limit improper uses of AI, offering their expertise on the dangers posed by the technology when deployed at scale.

“As a group, we are diverse in our politics and philosophies, but we are united in the conviction that today’s frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions,” the signatories of the brief said.

Google and OpenAI are both considered rivals to Anthropic when it comes to AI tools, and both companies also have such tools in government use.

OpenAI CEO Sam Altman admitted last week to rushing through the company’s new contract with the Department of Defense in the wake of Anthropic’s fallout with the government.

Anthropic is not seeking monetary damages in its lawsuit. Instead, it is asking the court to immediately declare that Trump’s directive “exceeds the president’s authority” and violates the Constitution, and to immediately set aside its designation as a supply chain risk.

Carl Tobias, a chaired professor at the University of Richmond School of Law, said that while a quick settlement of the lawsuit is possible, he expects the Trump administration to take a “scorched earth” approach.

“Anthropic may very well win in federal court, but this government is not shy about appealing,” Tobias said. “It will probably go to the Supreme Court.”