OpenAI has amended its contract with the US defence department just days after it was signed, with chief executive Sam Altman saying the rush to make a deal last week “looked opportunistic and sloppy”.

The company agreed terms with the Pentagon on Friday, handing over its AI models for use in classified military operations. The deal came hours after the collapse of negotiations between Anthropic, OpenAI’s rival, and defence secretary Pete Hegseth.

OpenAI claimed its agreement had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s”.

But on Monday, Altman said the ChatGPT maker was working with the department to add terms to its contract to ensure “the AI system shall not be intentionally used for domestic surveillance of US persons and nationals”.

Intelligence services such as the National Security Agency will be excluded from the deal for the time being, he added.

OpenAI has come under pressure since signing its deal with the Pentagon on Friday. Employees have voiced concerns internally and on social media, according to people familiar with the matter.

At the weekend, chalk graffiti appeared outside OpenAI’s San Francisco office saying “NO TO MASS SURVEILLANCE” and urging staff to “Do the right thing!”

Anthropic and OpenAI have expressed similar concerns about the use of AI for surveillance and for weapons with no human oversight, leading to questions about how OpenAI managed to strike a deal with the Pentagon where Anthropic had failed.

OpenAI said it was satisfied it could maintain its red lines around domestic surveillance and autonomous weapons through technical measures designed to stop misuse of its models. These include deploying the models only through the cloud, rather than on computers installed in military hardware that might carry out attacks, and keeping its employees in the loop.

Altman also suggested he had more trust in existing laws. “Anthropic seemed more focused on specific prohibitions in the contract, rather than citing applicable laws, which we felt comfortable with,” he said on Saturday.

But on Monday, Altman acknowledged some of the concerns that Anthropic raised about how AI could enable mass data gathering.

“We shouldn’t have rushed to get this out on Friday. The issues are super complex, and demand clear communication,” wrote Altman. “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

The additional terms announced on Monday would “prohibit deliberate tracking, surveillance or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information”.

Anthropic’s talks with the Pentagon collapsed after both sides failed to agree contract language. Dario Amodei, Anthropic’s chief executive, had laid out two red lines prohibiting the use of his company’s AI models for domestic mass surveillance or in lethal autonomous weapons.

Hegseth wanted the models available for “all lawful use”, but Anthropic executives argued current US law allowed for mass surveillance using AI tools. They pushed for contractual safeguards until new legislation was put in place.

The Pentagon signalled on Friday it was open to removing phrases from the contract that Anthropic felt left too much to interpretation. Senior figures at the company felt a deal was close, according to a person with direct knowledge of the talks.

But negotiations fell apart shortly afterwards, with the parties failing to agree terms around the mass collection of publicly available data, the person said.

The Trump administration has since threatened to cut Anthropic from government contracts and the Pentagon’s supply chain. The US Treasury, Federal Housing Finance Agency and government-backed mortgage giants Fannie Mae and Freddie Mac all announced they would end Anthropic contracts on Monday.

Retired US general Paul Nakasone, who serves on the OpenAI board, cautioned against a divide between the tech industry and defence department.

“This technology needs to be utilised by democracies,” he told the FT. “I think it has to be a partnership with a number of different private companies.”