The brewing conflict between the US military and Anthropic escalated over the weekend, when senior administration officials told Axios they were considering banning the Silicon Valley startup's models from use in the military.

But the roots of the conflict, which began in early January, are in the changing nature of the software stacks used by the Pentagon. As AI models become more powerful and general purpose, the same underlying models that power consumer chatbots could one day make life and death decisions on the battlefield, raising new ethical and technical questions.

Anthropic's Claude is one of the few “frontier” large language models available for classified use by the US government, offered through Amazon's Top Secret Cloud and Palantir's Artificial Intelligence Platform. That availability is how the chatbot ended up appearing on the screens of officials who were monitoring the seizure of then-Venezuelan President Nicolás Maduro.

The raid, condemned by many Democrats as lawless, came amid a growing resurgence of activism in Silicon Valley around the use of its products by the US government. Palantir has faced pressure in the UK and Europe over the use of its tools by immigration officials.

“The Department of War’s relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” said Chief Pentagon Spokesman Sean Parnell in a statement to Semafor.

Soon after the Maduro raid, during a regular check-in that Palantir holds with Anthropic, an Anthropic official discussed the operation with a Palantir senior executive, who gathered from the exchange that the AI startup disapproved of its technology being used for that purpose.

The Palantir executive was alarmed by the implication of Anthropic’s inquiry that the company might resist the use of its technology in a US military operation, and reported the conversation back to the Pentagon, a senior Defense Department official said.

That exchange led to a rupture in Anthropic’s relationship with the Pentagon, according to several people briefed on the matter. Semafor previously reported that on January 12, Defense Secretary Pete Hegseth jabbed Anthropic in a speech announcing the Pentagon’s new genai.mil platform, which allows Pentagon officials to use AI models from Google, OpenAI, and xAI for nonclassified purposes.

“We will not employ AI models that won’t allow you to fight wars,” Hegseth said, in a veiled reference to Anthropic.

An Anthropic spokesman called the account of the exchange between the company and Palantir “false.” The spokesman said the company has not “discussed this with, or expressed concerns to, any industry partners outside of routine discussions on strictly technical matters.”

“Anthropic is committed to using frontier AI in support of US national security. That’s why we were the first frontier AI company to put our models on classified networks and the first to provide customized models for national security customers. Claude is used for a wide variety of intelligence-related use cases across the government, including the DoW, in line with our Usage Policy,” the spokesman said.

Anthropic has not agreed to sign an “all lawful uses” contract with the Pentagon, which would allow Claude's use without any restrictions. Anthropic wants carve-outs that prohibit certain surveillance and autonomous weapons uses, according to people familiar with the matter.

Since then, the relationship between Anthropic and the Pentagon has deteriorated, according to people familiar with the matter. The Defense Department official told Semafor that the military is beginning to lose trust in Anthropic, viewing its models as a possible “supply chain risk,” and has made vague threats about barring subcontractors (like Palantir) from using them.

An official designation like that, which would be a rare move by the Pentagon, could scare away even private-sector customers and threaten Anthropic's business prospects just as the company prepares for an initial public offering later this year.

Behind the scenes, the two sides are still negotiating terms for a contract. “We are having productive conversations, in good faith, with DoW on how to continue that work and get these complex issues right,” the Anthropic spokesman said.