In a statement on Saturday, OpenAI claimed its agreement with the Pentagon had “more guardrails than any previous agreement for classified AI deployments, including Anthropic’s”.

But on Monday, Altman posted on X to say further changes were being made, including making sure its system would not be “intentionally used for domestic surveillance of U.S. persons and nationals”.

As part of the new amendments, intelligence agencies such as the National Security Agency would also not be able to use OpenAI’s system without a “follow-on modification” to the contract.

Altman added the company had made a mistake by rushing “to get this out on Friday”.

“The issues are super complex, and demand clear communication,” he said.

“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

OpenAI has faced backlash from users following its announcement it was working with the Pentagon.

Day-over-day uninstalls of the company’s ChatGPT mobile app reportedly surged to 295% on Saturday, compared with a typical 9%.

Meanwhile, Anthropic’s Claude rose to the top of Apple’s App Store ranking, where it remained on Tuesday.

The AI model was blacklisted by the Trump administration following Anthropic’s refusal to drop a corporate “red-line” principle that its technology should not be used to create fully autonomous weapons.

Despite this, it has since emerged that Claude was used in the US-Israel war with Iran just hours after Trump’s ban.

The Pentagon declined to comment on its dealings with Anthropic.