OpenAI has reached an agreement with the Pentagon to provide its artificial intelligence systems hours after President Trump ordered US government agencies to cease all use of Anthropic’s products, following a simmering row with the technology company.
Announcing the decision to halt the use of Anthropic on Friday, Trump said on Truth Social: “Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War”. There will be a six-month phase-out for agencies such as the US Department of War that use the company’s products, he added.
“I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote.
Later that evening, the head of OpenAI — the makers of ChatGPT and Anthropic’s primary commercial rival — said the company had agreed a deal with the Pentagon, respecting red lines that appeared similar to those that Anthropic had insisted on, and which the Trump administration previously denounced as “woke”.
Anthropic’s AI software, Claude, had been the only advanced model approved for US classified systems. But as it renegotiated a contract with the US Department of War, Anthropic baulked at the Pentagon’s demand that the government be allowed to make “any lawful use” of its AI.
According to Anthropic, the move would remove safeguards in Claude’s policy that prevent it being used to facilitate violence, develop weapons or conduct surveillance.

Pete Hegseth, left, with John Ratcliffe, director of the CIA, and President Trump at Mar-a-Lago last month
The disagreement has been public for several weeks. Pete Hegseth, the secretary of war, designated Anthropic a “supply-chain risk to national security” on Friday.
Supply-chain risk is a status given to firms linked to foreign adversaries, such as the Chinese company Huawei, and has never been applied to a US entity. It will prevent Anthropic from working with the government and jeopardise its contracts, worth about $200 million.
Anthropic vowed to challenge the decision in court, adding in a statement: “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons.”
The company said it refused to give in to the Pentagon’s demands for “two reasons”: because it “does not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons” and because the “mass domestic surveillance of Americans constitutes a violation of fundamental rights”.
Posting about OpenAI’s agreement with the Trump administration on Friday night, Sam Altman, the company’s chief executive, wrote on X: “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.”
In what could represent a major concession, Altman said the government agreed “with these principles, and we put them into our agreement”.
Replying to Altman’s social media post, the under secretary of war Emil Michael said his department was looking forward to working with a “reliable and steady partner that engages in good faith”.
Hegseth had given Anthropic a deadline of 5pm on Friday to comply, after tense meetings at which US officials apparently questioned whether the company would assist the military in the event of a hypothetical nuclear strike against the United States. According to The Washington Post, the two sides also clashed over how Claude had been used by US forces in last month’s capture of the Venezuelan leader Nicolás Maduro, through Anthropic’s partnership with the data analytics company Palantir.
The Trump administration’s feud with Anthropic had initially united two of the most prominent AI leaders.
Altman backed his rival, Dario Amodei, the co-founder of Anthropic, after the latter accused the Trump administration of wanting to exploit the company’s technology for mass domestic surveillance and fully autonomous weapons.

India’s prime minister, Narendra Modi, with Sam Altman and Dario Amodei — refusing to hold hands — at the AI Impact Summit in Delhi last week
On Friday, Amodei received unlikely backing from Altman, 40, who told CNBC: “I don’t personally think that the Pentagon should be threatening DPA [the Defense Production Act] against these companies. For all the differences I have with Anthropic, I mostly trust them as a company and I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters. I’m not sure where this is going to go.”
Altman’s intervention threatened to escalate the row into one between the Trump administration and the American AI industry, as Google, which produces Gemini, is also considering the Pentagon’s terms.
Hundreds of employees from OpenAI and Google signed an open letter backing Anthropic’s stance and urging their companies to do the same.
Late on Thursday, Amodei, 42, released an 800-word statement rejecting the government’s position.
“We believe AI can undermine, rather than defend, democratic values,” he said. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do. Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: mass domestic surveillance and fully autonomous weapons.”

Amodei is standing firm against the Pentagon
He added: “Regardless, these threats do not change our position: we cannot in good conscience accede to their request.”
Michael, the under-secretary of war, posted in response on X that Amodei was a “liar”, had “a God complex” and “wants nothing more than to try to personally control the US military and is OK putting our nation’s safety at risk”.
Amodei and his company have previously fallen foul of the Trump administration for supporting the Democratic Party and for pursuing a cautious approach to the ethics around AI development. David Sacks, Trump’s “AI and crypto” tsar, has branded the company woke for its pro-safety positioning.

David Sacks and Trump in January. Sacks has criticised Anthropic
In an email to staff on Thursday, seen by The Wall Street Journal, Altman wrote: “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
By contrast, Elon Musk’s xAI company has agreed to the “all lawful uses” policy, but is so far the only developer to do so. Musk posted on X that “Anthropic hates Western Civilisation” in reaction to Amodei’s statement.
Anthropic was set up by Amodei, his sister Daniela and others after they left OpenAI because of disputes with Altman over safety. That rift persisted at the recent AI Impact Summit in Delhi, where Altman and Amodei refused to hold hands.
Anthropic has always portrayed itself as a leader in “responsible AI”, but recently the shine has come off that image. This week it dropped a commitment to stop developing its software unless specific safety guidelines were met. It has previously been criticised for Project Panama, its secret plan to train Claude by digitising millions of books. It proposed buying them up, slicing off the spines and scanning them before they were destroyed. Legal papers unsealed in January said: “We don’t want it to be known we are working on this.”
Last week an Anthropic researcher working on safety quit to become a poet, hinting in a cryptic post on X that the company’s values were not being followed.
The developments have reinforced fears from campaigners that safety and checks are being sacrificed in the AI race. Jack Shanahan, a former US Air Force general who ran the Pentagon’s AI unit from 2018 to 2020, posted on X that “you won’t find a system with wider & deeper reach” across America’s military than Claude, but he criticised the Pentagon’s apparent goals.

The Pentagon is trying to negotiate with other AI companies. Only one — xAI, owned by Elon Musk — has agreed to its terms on “all lawful uses” of the technology
“No LLM [large language model], anywhere, in its current form, should be considered for use in a fully lethal autonomous weapon system. It’s ludicrous even to suggest it,” he said. “So making this a company red line seems reasonable to me.”
Michael told CBS that the administration had offered to put in writing that federal law bars the military from conducting surveillance on Americans and from making final targeting decisions without human involvement.
However, he added: “At some level, you have to trust your military to do the right thing. But we do have to be prepared for the future. We do have to be prepared for what China is doing. So we’ll never say that we’re not going to be able to defend ourselves in writing to a company.”