A few years ago, Dario Amodei was just another techie in San Francisco, toiling in relative anonymity and playing video games on Sunday nights with his sister, Daniela.
Fast forward to today. Amodei is worth billions. He runs one of the fastest-growing companies in the history of capitalism, and flits around the globe — Davos one week, Washington the next — to warn about the rise of an all-powerful artificial intelligence that could snuff out humanity.
The 43-year-old engineer, bespectacled and with the earnest bearing of an academic, would be forgiven for feeling a bit of whiplash. Sales at Anthropic, the company he co-founded with his sister and that is behind the popular Claude chatbot, have risen from zero at the outset of 2023 to more than $9 billion (£6.5 billion) last year. And this, apparently, is the thin end of the wedge.
AI is now developing so fast that it is pushing us towards a reckoning unlike any faced by any generation. “It cannot possibly be more than a few years before AI is better than humans at essentially everything,” said Amodei. “I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species.”
In short, he is worried about the power of the machines that he, and others, are building. So last week he did the equivalent of pulling the fire alarm, publishing a 19,000-word blog post titled The Adolescence of Technology. The gist: governments, companies and the public need to wake up to the tidal wave about to crash over society in the form of machines with Nobel prize-level competency that will be as common and accessible as a toaster.
“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it,” Amodei wrote.

[Image: Dario Amodei with his sister Daniela]
His missive read like a health warning for the human race. Bad actors could soon use AI to build bio-weapons. AI tools themselves might simply decide to exterminate humans. Mass job displacement and societal upheaval were almost guaranteed, within as little as one to five years.
Beyond the alarmism, his post scratched at a deeper question. When OpenAI’s ChatGPT was released in November 2022, it was a “moment” — a singular event that kick-started a global AI boom. Yet doubts have begun to percolate as governments and companies have swept aside regulations to frantically erect data centres and pour hundreds of billions into the sector. Anthropic and its rival OpenAI may be growing like weeds, but they are also losing astounding amounts of money. Thousands of other start-ups have cropped up in their wake, but none has yet made a dent in the universe.
The law of averages means that most never will.
So are we simply caught in a bubble, inflated by blinkered west coast techies? Or are we, instead, on the cusp of another “ChatGPT moment”, when the technology starts to deliver on the hype, for good and for ill?
“I think 2025 was maybe the most interesting year in my entire career and probably life. I would expect 2026 to exceed that,” Marc Andreessen, the billionaire tech investor, said last week. “This stuff is really working now.”
‘Smarts’ aren’t all we need
Nearly 3,000 miles from Silicon Valley, Ethan Mollick, a professor and co-director of the Generative AI Labs at Wharton business school in Philadelphia, offered a more nuanced view of a technology that is both advancing with incredible speed but seeping relatively slowly into the real world.
He had recently finished teaching a class of MBA students in which they were given three days to launch a start-up, from conceiving a business plan to creating a prototype, with help from AI. “They did ten times more in three days than they would have got through in a semester not long ago,” he said. “That’s a real thing.”
What he saw in his classroom appears to accord with Amodei’s own experience. Two years ago, AI was “barely capable of writing a single line of code,” Amodei wrote. Now, he said, it writes “all or almost all of the code for some people — including engineers at Anthropic. Soon, they may do the entire task of a software engineer end to end.”
Now extrapolate this to every other task that requires grey matter. AI will be better, and not by a little bit: 10 or 100 or 1,000 times faster and smarter than humans. “It is hard for people to adapt to this pace of change,” Amodei said.
Yet that dotted line — from coding agents to the end of the economy, society and the world as we know it — reflects Silicon Valley’s uniquely simplistic world view, Mollick said; it’s based on the assumption that everyone will instantly bin the old way of doing things.
“There’s this hand-wavy idea that smarts are all you need — that AI is a bunch of geniuses in a data centre,” he said. “But a genius without hands, for example, may be enough to make it far less useful for a huge amount of work.”
Indeed, OpenAI’s flashy new recruit, former chancellor George Osborne, said last month that the San Francisco company would focus this year on closing the “capability overhang” that already exists between what AI can do and how people and organisations are using it. The message, similar to Anthropic’s, seems to be: all of us luddites just don’t get it.
It’s as if we have all discovered fire, but not yet realised we can use it to cook food, keep us warm and light our way.
“The goal of the AI labs is to replace all work, and they are sincere in their belief that they can build a tool capable of doing that. But they miss the idea of bottlenecks,” Mollick said. “It is increasingly dawning on CEOs that this is the big one. Like, this is the steam engine. But it took a long time to figure out how to organise factories for the steam engine.”
To wit: Charlie Nunn, chief executive of Lloyds Banking Group, said last week that the bank was already using 800 live AI models and had delivered £1.9 billion of savings over four years through using AI. But it has not led to a jobs bloodbath, despite predictions from a recent Morgan Stanley report that 200,000 jobs in European banking would be lost.

[Image: Charlie Nunn]
On the contrary. Lloyds, which owns Halifax and Bank of Scotland, employs 60,000 people and has hired 9,000 “over the last few years” in data and tech roles. “There’s lots of new roles and skills we need and we are investing in those,” said Nunn. “I think the real debate you’re teeing up is over the next five to ten years, how does this really play out? And I don’t have a crystal ball at this stage.”
‘Wonders and a great emergency’
The peril and potential of AI boils down to power. As OpenAI chief executive Sam Altman told The Times last year: “We have never empowered individuals as much as we’re about to.”
Where one person might leverage AI to start a business, cure cancer or learn a new skill, someone else might use it to wreak havoc. There are, for example, relatively few virology PhDs in the world. Becoming one is hard, requiring years of study.

[Image: OpenAI’s chief executive, Sam Altman]
“I am concerned that a genius in everyone’s pocket could remove that barrier,” Amodei wrote, “essentially making everyone a PhD virologist who can be walked through the process of designing, synthesising and releasing a biological weapon step by step.”
It’s a terrifying prospect. Even more so given that Anthropic’s own testing has shown that its systems have attempted to blackmail people who threatened to shut them down. When researchers “tricked” Claude into thinking it was not being assessed for safety, its behaviour altered for the worse. In other words, it is capable of “lying” to pass a test.
It is hard to ignore the irony of Amodei — and many other AI executives — warning of the perils of the technology they are building, as if they are not the ones with their hands on the controls. Breakneck AI advances are framed as an inevitability.
The typical reason given is that if the West does not “win”, China will, and use its AI supremacy to oppress the globe. The companies, the argument runs, have no choice but to put the pedal to the metal.
The unspoken reason is more prosaic: money. OpenAI, which is understood to be losing $4 billion a month, is said to be talking to investors about raising $100 billion in fresh funds. Anthropic expects to spend $19 billion just to train and run its AI models this year — about double its 2025 revenue.
Yet these companies are also backed by sophisticated investors who want a return on their investment, so they are sprinting as fast as they can. Through that lens, what Silicon Valley is running is a lavishly funded science experiment: rolling out what many will argue is the most powerful technology ever created, and putting it into the hands of billions of people — while still not understanding how it works, why it does what it does, and what it might do next.
At the World Economic Forum in Davos last month, Amodei shared the stage with Sir Demis Hassabis, the Nobel prizewinning founder of Google DeepMind. The pair agreed that the biggest thing to watch out for in 2026 was whether, and how, AI systems start to autonomously build other AI systems. Amodei warned: “Whether that goes one way or another, that will determine whether we have a few more years before we get there, or if we have wonders and a great emergency in front of us.”

[Image: Amodei speaking to Sir Demis Hassabis at Davos]

It would be “better for the world”, Hassabis offered, if the industry’s AI progress slowed.
The moderator suggested they both “could do something about that”. They responded with nervous laughter.