Being born in 1980, I don’t remember much about the Cold War. But I remember that sense of impotence: the fear that people very far away from me could make a decision, or just a mistake, that would end everything I knew. And recently I’ve started to have that feeling again.

The other week Dario Amodei, the head of Anthropic, published a 19,000-word essay on the looming dangers of artificial intelligence. As Danny Fortson wrote in The Sunday Times, it was a powerful, but also deeply unsettling, read.

“I believe we are entering a rite of passage … which will test who we are as a species,” said Amodei. And: “AI is so powerful, such a glittering prize, that it is very difficult for human civilisation to impose any restraints on it at all.” He warned that “the years in front of us will be impossibly hard” and concluded that “humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it”.

This is not, to put it mildly, what you normally hear from the chief executive of a $350 billion start-up. But then Anthropic is worth $350 billion precisely because investors believe the technology Amodei and his rivals are developing could change everything. In particular, Amodei thinks we may soon reach the point where a single AI data centre has the intelligence, and capacity, of an entire country of geniuses.

The essay is worth reading in full. In fact I’d argue it’s essential that people do so. But, to boil it down, he identifies three core risks.

First, there's the risk of AI agents doing bad things of their own volition, whether deliberately (the kind of thing familiar from a thousand sci-fi movies) or through misaligned reward mechanisms. The classic thought experiment here is asking a sufficiently capable AI to maximise paperclip production, and having it convert the entire planet into a paperclip factory, never mind the consequences for the squishy humans.

Second, there’s the risk of people using AI to do bad things, whether that’s building nuclear bombs, distributing new and deadly biotoxins or creating despicable systems of autocratic control. Amodei conjures up the prospect of advanced AI being used to propagandise populations, direct swarms of armed drones or monitor every piece of communication and conversation for disloyalty to the regime.

But while Amodei describes AI — the same AI he is busy building — as “the single most serious national security threat we’ve faced in a century”, he argues that there is a third great risk. Even if we navigate the security implications of AI, we are not remotely ready for the wider consequences, whether that be mass unemployment, hyperconcentration of wealth or acute social and economic disruption.

Some of this change will be sociological — even psychological. Jonathan Haidt, the academic who has led the charge against smartphones for under-16s, argues that where social media hacked our attention, AI is hacking our attachment. Already, there are stories of people doing horrible things after becoming emotionally dependent on their comforting, friendly chatbots, which all too often abet them not just in their decisions but their delusions.

Then there’s the technology itself. In just the last few days the launch of an AI-only social media network, Moltbook, transfixed the world’s press — though no one could quite decide whether it was the dawn of a robot revolution or a bunch of bots cosplaying at being as stupid as humans.

Meanwhile, the valuations of the big tech firms have been seesawing by hundreds of billions with every hint to investors that they are edging ahead or behind in the AI race, that they are spending too much or too little. Last week, software and services stocks lost $830 billion over six days after news broke that the AI tools developed by Amodei's firm could carry out legal tasks such as reviewing documents.

This is, in other words, by far the biggest story in the world. Google has just confirmed that it will double its capital spending to as much as $185 billion this year. That’s more cash in a single year than the entire value of Unilever, BP, Rolls-Royce, Barclays, GSK or all but a handful of Britain’s corporate champions.

Which brings us, grimly and depressingly, back to the UK, where it often feels as if the AI revolution isn't even happening. Indeed, when I wrote about it in September, the response was pretty much a shrug: that's the weird thing those weird Americans are doing over there.

I have severe doubts — as does Amodei — about the willingness or ability of the US government to properly regulate this technology, particularly while it is engaged in a digital arms race with China. But I throw my hands up in despair at Britain’s typical bumbling and bimbling.

Yes, we have designated “AI growth zones” — but we’re still relying on Ed Miliband to provide the power for their data centres, at sky-high prices. Likewise, the government has promised an “AI revolution” in schools — but the package of £187 million to “bring digital skills and AI learning into classrooms and communities” looks a bit less impressive when you consider that it amounts to roughly £20 per pupil.

And there’s no sign at all that we’re grappling with the wider implications of AI. Should kids still be doing coding classes, given that traditional coding has essentially evaporated? Even the best programmers in the world are now telling AI to write the apps for them. In fact that was one of the causes of that stock market correction: the implications for software firms’ ability to sell expensive subscriptions to their corporate clients. (And, yes, having AI write the code that makes new AI is one of the many contributors to my insomnia.)

In a way, the AI story is kind of a metaphor for the sheer inability of the Starmer government, and Keir Starmer himself, to make anything seem exciting or dynamic — to provide a sense of story, of a mission animating either the government or the prime minister. But it’s also a metaphor for Britain and its place in the world.

I don’t know what kind of world AI will make. I don’t know what it will mean for my children. But even though Rishi Sunak tried his best to make Britain a frontier nation on AI security, even though many of the top researchers (including Google’s) are based here, it still feels as though we, our government and our economy are largely peripheral to the revolution that is under way — and becoming ever more so.

Sleep tight, everyone.