The gap between the speed of artificial intelligence development and the strength of its governance has always been worrying. The extraordinary standoff between Anthropic and the US government suggests it is widening. AI is no longer a subject for science-fiction speculation: it is being used right now to select bombing targets in Iran and to process intelligence at a scale no human analyst could match.

The broad outlines of the dispute are striking enough. Anthropic, maker of the Claude AI model, refused to sign a Pentagon contract that it said would have permitted its technology to be used for domestic mass surveillance of US citizens and for autonomous weapons systems capable of killing people without human authorisation. For this, the Trump administration designated the company a “supply chain risk” and ordered its agencies to sever ties. The designation, unprecedented for an American company, amounts to a significant act of economic coercion against a private firm.

Anthropic’s main competitor, OpenAI, stepped in within hours to sign its own deal with the Pentagon, claiming to have secured the same protections Anthropic could not. Under pressure from its own employees, the company published selected contract language and subsequently amended its terms. But it declines to release the full agreement, and the assurances offered look like the kind of fig leaves that US national security institutions have historically treated as optional.

The deeper problem is not just which company behaved better. This is a situation in which fateful decisions over the use of lethal AI are being negotiated in private between technology corporations and the defence establishment, with no meaningful oversight, no agreed international framework and no settled definition of what an autonomous weapon even is. On one side are executives whose instinct for survival in a competitive market is at least as powerful as their ethical commitments. On the other is an administration that has shown contempt for accountability. The philosophical and moral questions here are long-term and profound. The people answering them are not.