In the race to artificial superintelligence, Washington and Beijing are settling into a familiar, though dangerous, rhythm. Policy elites have dubbed it “Cold War 2.0”, implying that we have been here before and that the old playbook of red lines, containment, and the grim stability of deterrence will keep us safe.
This assumption is a fatal error.
The rules of this race are radically different. The Cold War stayed cold because nuclear weapons created a stalemate: defence was impossible, but a retaliatory strike was almost guaranteed. The AI race is destabilising because it promises the opposite: a decisive, winner-takes-all first-mover advantage in which the loser cannot effectively respond.
In fact, the gap between the winner and the loser widens exponentially. Once deployed, an artificial general intelligence (AGI) can self-improve at machine speed while simultaneously sabotaging the rival’s progress, effectively pulling up the ladder behind it. We are not heading towards a stalemate; we are entering a “suicide region” where the rational fear of being second overrides the imperative of safety, compelling both sides to race towards a cliff.
The mechanics of mutually assured destruction (MAD) that fostered stability in the 20th century were not built on goodwill or treaties. They rested on the hard physics of each player’s second-strike capability.
Had the Soviet Union launched a surprise nuclear attack in the early 1960s, enough American submarines and hardened silos would have survived to retaliate immediately, and Moscow knew it. This was the foundation of the mutually assured destruction equilibrium: neither side could neutralise the other with a single decisive blow, so the standoff settled into a stalemate, and the stalemate delivered stability. The “monopolist interval” – the window in which one side could act with impunity – was effectively zero.
However, in the case of AGI, the players are not racing to build a bomb; they are racing to build a superintelligent mind. This drastically changes the physics of the game. A nuclear weapon is a static tool – a powerful hammer that requires a human hand to swing it. A superintelligence is an active agent – a grandmaster capable of anticipating moves, countering strategies, and rewriting the rules in real time – offering the winner a decisive strategic advantage: the ability to dominate cyber-infrastructure, financial markets, and military command-and-control systems simultaneously.
Crucially, this advantage is likely to be total. If the US or China achieves AGI first, it could theoretically neutralise its rival’s ability to retaliate, locking it out of its own networks before it even knows the race is over. This eliminates the second-strike capability on which the entire MAD doctrine rests.
In a winner-takes-all AI race, the loser is effectively disarmed the moment the winner crosses the finish line. This leads players to a terrifying, albeit natural, conclusion: if they cannot guarantee they will survive an initial strike and retaliate, they cannot afford to wait. They must strike (deploy) first.
The massive first-mover advantage of AGI transforms the equilibrium from a stalemate into a sprint. The monopolist interval is now effectively unbounded for the winner and fatal for the loser. The logic of MAD dissolves, replaced by the incentive to pre-empt: the perceived necessity to launch an unfinished, unsafe AGI system simply to prevent the rival from deploying first.
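To make the structural shift concrete, here is a minimal illustrative sketch in Python. The payoffs are invented for illustration, not estimates of anything real: they simply encode a MAD-style game, where retaliation is guaranteed, alongside a stylised winner-takes-all race, where the first to deploy locks the rival out.

```python
# Illustrative sketch only: all payoffs are invented.
# Payoffs are (row player, column player).

# Cold War-style game: second-strike capability means striking first
# still invites retaliation, so striking never improves on waiting.
mad_game = {
    ("wait", "wait"):     (0, 0),          # uneasy but stable standoff
    ("strike", "wait"):   (-100, -100),    # retaliation is guaranteed
    ("wait", "strike"):   (-100, -100),
    ("strike", "strike"): (-100, -100),
}

# Stylised AGI race: the first to deploy disarms the rival,
# so deploying beats waiting no matter what the other side does.
agi_game = {
    ("wait", "wait"):     (0, 0),
    ("deploy", "wait"):   (100, -100),     # decisive first-mover advantage
    ("wait", "deploy"):   (-100, 100),
    ("deploy", "deploy"): (-50, -50),      # both rush unsafe systems
}

def waiting_is_stable(game, hold, move):
    """True if neither player gains by unilaterally switching from 'hold' to 'move'."""
    row_gain = game[(move, hold)][0] - game[(hold, hold)][0]
    col_gain = game[(hold, move)][1] - game[(hold, hold)][1]
    return row_gain <= 0 and col_gain <= 0

print(waiting_is_stable(mad_game, "wait", "strike"))   # True: deterrence holds
print(waiting_is_stable(agi_game, "wait", "deploy"))   # False: pre-emption pays
```

The point is not the specific numbers but the structure: once deploying strictly beats waiting regardless of what the rival does, mutual restraint stops being an equilibrium.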
This fatalistic dynamic produces the strategic window we are now trapped in – the “suicide region”. This is a zone in which both Washington and Beijing understand that the technology is unsafe – that the risk of existential ruin is high – but the fear of the other side getting it first is higher.
In a normal market, if a product has a 50% chance of exploding, you don’t build it. But in a winner-takes-all game where the prize is, theoretically, the world, the calculus drastically changes.
The degree of supremacy conferred on the winner ensures that both players are compelled to race. And because the risk of extinction is shared globally while the prize of supremacy is exclusive to the winner, the rational move is to ignore the risk. It becomes a fixed condition of the game – a terrifying “cost of doing business”. The logic of the suicide region dictates that even if the probability of catastrophe is high, states will still race, because the alternative – guaranteed defeat and the loss of sovereignty – is viewed as the worse outcome.
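A toy expected-value calculation illustrates this logic. Again, every number here is invented purely to show the shape of the argument, not to estimate real probabilities or stakes.

```python
# Toy sketch of the "suicide region" calculus: all numbers are invented.
p_catastrophe = 0.5    # assumed chance that a rushed, unsafe AGI ends in ruin
win_payoff    = 100    # exclusive prize: strategic supremacy
ruin_payoff   = -100   # shared catastrophe
lose_payoff   = -100   # guaranteed defeat and loss of sovereignty if the rival wins

# Race while the rival holds back: either win outright or trigger catastrophe.
ev_race = (1 - p_catastrophe) * win_payoff + p_catastrophe * ruin_payoff   # = 0

# Hold back while the rival races: lose the race or share the catastrophe.
ev_hold = (1 - p_catastrophe) * lose_payoff + p_catastrophe * ruin_payoff  # = -100

print(ev_race > ev_hold)  # True: under these payoffs, racing dominates restraint
```

With these invented payoffs, even a 50 per cent chance of ruin is not enough to make holding back attractive, because holding back carries the same shared ruin plus certain defeat.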
The result is a grim inversion: mutually assured destruction is no longer a deterrent; it is merely a baseline condition.
So, how do we escape? The first step is to abandon the comforting delusion that deterrence will save us. Building more chips and bigger models does not create stability; in a strict winner-takes-all game, it only accelerates the timeline to the crash. We survived the last arms race because the nature of the weapon and the technological constraints of the era made war irrational. We will only survive this one if we recognise that the nature of AGI offers no such safety valve.
To stop the race, we cannot rely on trust; we must rely on a fundamental restructuring of the game itself.