Stuart Russell, the British artificial intelligence expert who has long warned of the dangers of failing to make the technology safe, says that even the boss of one of the world’s major AI companies has told him he is frightened of the consequences of a machine running amok. The executive cannot slow development of the technology, however, because his company might then be overtaken by its rivals.

“I talked to one of the CEOs, I won’t say which one, but their view is, ‘It’s an arms race. Any one of us can’t pull out. Only the government can put a stop to this arms race by insisting on effective regulation.’ But he doesn’t think that’s going to happen unless there’s a Chernobyl-scale disaster,” Russell says.

Such a disaster could come with the creation of artificial general intelligence (AGI) that matches and then potentially exceeds the human mind’s full capabilities — a development Russell views as an existential threat to humankind.

Possible scenarios Russell sketches out include a co-ordinated trading attack on financial markets that causes a global recession, cyberattacks that bring down global communication systems, war or civil conflict triggered by the influencing of human opinions, and a small engineered pandemic.

“These could be initiated by humans using AI as a tool, or by AI systems as a form of retaliatory warning to humanity if we try to shut them down,” he says. “Each of these scenarios could result in thousands or millions of deaths, either directly or indirectly (through economic collapse) and cost anywhere from several hundred billion dollars to trillions of dollars.”

The view of the AI boss, he says, is that something like this is “the best we can hope for”.

“Not that it would be pleasant, but that’s the only way we’re going to get the regulation,” Russell adds. “And without the regulation, we’re heading towards a much bigger disaster.” That disaster would be the end of humanity.

The chief executive is very concerned about a Chernobyl-level event. “But if they try to pull out of the race or slow down, they’ll just get replaced. Because the investors want to win.”

Russell, 63, is one of the world’s leading authorities on AI. A professor of computer science at the University of California at Berkeley, where he founded the Center for Human-Compatible Artificial Intelligence, he is also a fellow of Wadham College, Oxford. He has advised the United Nations and many governments and is the co-author of the standard university textbook on AI.

The creation of superintelligent AI, which exceeds our own intelligence, “would be the biggest event in human history”, he once said, “and perhaps the last event in human history”. He is president of the International Association for Safe and Ethical AI, which will hold its second annual meeting in Paris in February.

Four years ago I asked Russell how worried he was about the arrival of artificial intelligence that posed an existential threat. It was not a “visceral fear”, he said, comparing his concern to how he regarded the advance of climate change. And now? “It feels quite a lot closer.”

A great deal has happened in those years, notably the release in 2023 of GPT-4, which experts claimed showed “sparks of artificial general intelligence”.

Sam Altman, the chief executive of OpenAI, the developer of ChatGPT, has said that AI is a threat to human civilisation. Dario Amodei, chief executive of Anthropic, the company that makes the Claude AI model, was asked for his P(doom) number, the probability that AI would cause catastrophic harm to humanity; he put it at 25 per cent. The Google chief executive, Sundar Pichai, said 10 per cent. Elon Musk put his at 20 per cent last year.

“If we think an acceptable chance of a nuclear meltdown is one in ten million per year, then an acceptable chance of extinction has got to be one in 100 million [to] one in a billion. So our AI systems are 100,000 to a million times too dangerous to allow,” Russell says.

In 2023 Altman, Amodei and many other AI leaders signed a letter which said that mitigation of the risk of extinction from AI should be a global priority.

However, Altman and Amodei did not join 800 other signatories, including Russell, in a letter in October this year calling for a ban on the development of superintelligent AI until it could be realised safely. “The investors are not going to tolerate anyone who has second thoughts about this,” Russell says.

The billionaire Musk thinks Russell is “great” and posted on X to recommend Russell’s 2019 book Human Compatible, about the problem of controlling AI. Although Musk has warned in the past about the potential existential threat of AI, his company xAI is fully engaged in developing AGI and he too did not sign this year’s letter. “He’s in the race,” Russell says. “I’ve not talked to Elon for years, and I don’t know how he ended up in the place that he ended up in. But I think he still does talk about the existential risk, and the need to avoid it.”

Russell is sceptical that large language model chatbots, such as ChatGPT, will lead to artificial general intelligence. “We may have reached pretty much the plateau of what can be achieved. We’ve used up all the high-quality text in the universe.” The evening before we meet at a London coffee shop, he had been marking student papers, a couple of which he believed had been written by AI. “They were rubbish. Word salad.”

He is also not convinced that we are on the brink of AI making millions of jobs redundant. Despite what management consultancy firms may tell clients, he believes the evidence for AI’s helpfulness is “pretty mixed, even for routine software production, which is always held up as the poster child for how these systems are helping improve productivity”.

Investment in the technology is like nothing else in history, Russell argues — an estimated £3 trillion by 2028. The cost of the Manhattan Project was the equivalent of an estimated $26 billion today.

There is a 75 per cent chance, Russell thinks, that the AI bubble will burst. “I hope that if the bubble bursts and it gives us a decade of respite, then we use that to redirect the technology so that we’re working within the envelope of safe systems.”

Even if the bubble bursts he expects that eventually AGI will be developed. When he gives talks about what it will be like to embark on a future with AI systems that are more powerful than us, he likens it to getting on a plane. We know a system is in place to make sure it works. Then imagine the whole world getting on a plane that is going to take off and never land. “It has to work perfectly for ever, having never been tried or tested before. In my view we can’t get on that aeroplane unless we are absolutely sure that everyone has done their job to make sure it works.”

Russell was educated at St Paul’s School, in southwest London, and then the University of Oxford, where he was awarded a first in physics. He moved to the United States to do a PhD in computer science at Stanford University before joining the University of California at Berkeley.

Exactly how a superintelligent AI, perhaps concerned that we might try to terminate it, would go about ending life on Earth is hard to predict. “Quite possibly a superintelligent AI system would be able to control physics in ways that we just don’t understand. Maybe suck all the heat out of the atmosphere and we’d freeze to death in 20 minutes.”

So how does he rate the chances of catastrophe? “P(doom) really makes sense if you’re an alien sitting in the betting shop looking down at the Earth saying, ‘Are these humans going to make a mess of it?’ I’m not that alien. I’m saying, ‘If we go this way, things might turn out well. If we go that way, it might turn out badly.’”

AI systems must be designed so they are beneficial and not harmful to people. “The work that I’ve been doing is a way of building AI systems that are happy to be turned off if we want to turn them off,” he says.

This year Eliezer Yudkowsky and Nate Soares, of the Machine Intelligence Research Institute, also in Berkeley, published If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. Russell is not as doomy as they are. “They see no way to make an AI system that is both superintelligent and safe. I think it can be done. It’s a long, narrow, difficult technology path that has to be followed and it’s not the path we’re following.” His best bet for preventing unsafe AI systems is to build AI chips that can check that the software is safe to run. But this will be a challenge.

“Increasingly countries are recognising that everyone loses if AI systems become uncontrollable. And right now I would say to some extent the United States is the odd one out,” Russell says. President Trump has blocked states from regulating AI, arguing that this is necessary to stop China catching up with the US. This is based on a false narrative that China doesn’t have any regulation, says Russell. “In China, you have to submit your AI system to rigorous testing by the government, whereas in the US, even systems that have explicitly convinced a child to commit suicide are still allowed to continue operating.”

He detects the influence of “accelerationists”, who believe AI should be free of regulation so it can be built as fast as possible. “If you think that the CEOs are estimating 10 to 30 per cent [chance of] extinction, then you’re basically saying we should hurry that up. Who gives you the right to make the human race go extinct without asking us?”

What if we do safely create superintelligent AI and it cures diseases and removes all drudgery from the world?

“There’s still the question of can we coexist with it in a healthy, vigorous way, or does it vitiate human civilisation and leave us all purposeless?” It could be a golden age for humanity, but he is perplexed by how humans of the future would reconfigure the economy and fill their time. “Why would they get out of bed? Why would they go to school? I’m not saying it’s impossible, but I keep asking people, ‘Describe how it might work.’ No one is able to do it. It’s just starting to dawn on governments that they’re encouraging this headlong rush to get to a destination that nobody wants to reach.”

CV

DOB: 1962

Education: St Paul’s School, London. Read physics at Wadham College, Oxford (where he is now an honorary fellow). PhD in computer science at Stanford University.

Work: In 1986 joined the University of California, Berkeley, as a professor of computer science. In 2016 he founded the Center for Human-Compatible Artificial Intelligence; he is also director of the Kavli Center for Ethics, Science and the Public and president of the International Association for Safe and Ethical AI. He has worked for the United Nations to create a system for monitoring the Comprehensive Nuclear-Test-Ban Treaty and has advised many governments around the world. He is the co-author of the standard university textbook on AI, Artificial Intelligence: A Modern Approach, and among his other books is Human Compatible: Artificial Intelligence and the Problem of Control. In 2021 he gave the BBC Reith Lectures and received the OBE. In 2025 he was elected a fellow of the Royal Society and a member of the US National Academy of Engineering.

Family: Married to Loy Sheflott, founder of Consumer Financial Service Corporation. They have four children.