{"id":206109,"date":"2025-12-27T04:54:10","date_gmt":"2025-12-27T04:54:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/206109\/"},"modified":"2025-12-27T04:54:10","modified_gmt":"2025-12-27T04:54:10","slug":"we-can-coexist-with-superintelligent-ai-but-who-wants-to","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/206109\/","title":{"rendered":"We can coexist with superintelligent AI but who wants to?"},"content":{"rendered":"<p>Stuart Russell, the British expert on artificial intelligence who has long warned about the dangers of failing to make the technology safe, says even the boss of one of the world\u2019s major AI companies told him he is frightened of the consequences of a machine running amok. He cannot slow down development of the tech, however, because then his company might be overtaken by its rivals. <\/p>\n<p>\u201cI talked to one of the CEOs, I won\u2019t say which one, but their view is, \u2018It\u2019s an arms race. Any one of us can\u2019t pull out. 
Only the government can put a stop to this arms race by insisting on effective regulation.\u2019 But he doesn\u2019t think that\u2019s going to happen unless there\u2019s a Chernobyl-scale disaster,\u201d Russell says.<\/p>\n<p>Such a disaster could come with the creation of artificial general intelligence (AGI) that matches and then potentially exceeds the human mind\u2019s full capabilities \u2014 a development Russell views as an existential threat to humankind.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u2022 <a href=\"https:\/\/www.thetimes.com\/business\/technology\/article\/fact-check-ai-artificial-intelligence-silicon-valley-s5sxfmdgw\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">\u2018I hired a million of the world\u2019s smartest people to fact-check AI\u2019<\/a><\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Possible scenarios Russell sketches out include a co-ordinated trading attack on financial markets that causes a global recession, cyberattacks that bring down global communication systems, war or civil conflict triggered by the influencing of human opinions, and a small engineered pandemic.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u201cThese could be initiated by humans using <a href=\"https:\/\/www.thetimes.com\/topic\/artificial-intelligence\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">AI<\/a> as a tool, or by AI systems as a form of retaliatory warning to humanity if we try to shut them down,\u201d he says. \u201cEach of these scenarios could result in thousands or millions of deaths, either directly or indirectly (through economic collapse) and cost anywhere from several hundred billion dollars to trillions of dollars.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">The view of the AI boss, he says, is that something like this is \u201cthe best we can hope for\u201d. 
<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u201cNot that it would be pleasant, but that\u2019s the only way we\u2019re going to get the regulation,\u201d Russell adds. \u201cAnd without the regulation, we\u2019re heading towards a much bigger disaster.\u201d That disaster would be the end of humanity. <\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">The chief executive is very concerned about a Chernobyl-level event. \u201cBut if they try to pull out of the race or slow down, they\u2019ll just get replaced. Because the investors want to win.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Russell, 63, is one of the world\u2019s leading authorities on AI. A professor of computer science at the University of California at Berkeley, where he founded the Center for Human-Compatible Artificial Intelligence, he is also a fellow of Wadham College, Oxford. He has advised the United Nations and many governments and is the co-author of the standard university textbook on AI.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">The creation of superintelligent AI, which exceeds our own intelligence, \u201cwould be the biggest event in human history\u201d, he once said, \u201cand perhaps the last event in human history\u201d. He is president of the International Association for Safe and Ethical AI, which will hold its second annual meeting in Paris in February.<\/p>\n<p><img decoding=\"async\" alt=\"Stuart Jonathan Russell, a British computer scientist and professor, standing with hands in pockets.\" loading=\"lazy\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/\/81836e7a-04fe-4b81-bd56-3fc8ab70d38c.jpg\" class=\"responsive-sc-1nnon4d-0 bAbKns\"\/><\/p>\n<p>TIMES PHOTOGRAPHER RICHARD POHLE<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Four years ago I asked Russell how worried he was about the arrival of artificial intelligence that posed an existential threat. 
It was not a \u201cvisceral fear\u201d, he said, comparing his concern to how he regarded the advance of climate change. And now? \u201cIt feels quite a lot closer.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">A great deal has happened in those years, notably the release in 2023 of GPT-4, which experts claimed showed \u201csparks of artificial general intelligence\u201d.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Sam Altman, the chief executive of <a href=\"https:\/\/www.thetimes.com\/business\/companies-markets\/article\/amazon-invest-openai-chatgpt-djnlsst6t\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a>, the developer of <a href=\"https:\/\/www.thetimes.com\/topic\/chatgpt\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a>, has said that AI is a threat to human civilisation. Dario Amodei, chief executive of Anthropic, the company that makes the Claude AI model, was asked what was his P(doom) number, the probability that AI would cause catastrophic harm to humanity. He said 25 per cent. The Google chief executive, Sundar Pichai, said 10 per cent. Elon Musk put his at 20 per cent last year.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u201cIf we think an acceptable chance of a nuclear meltdown is one in ten million per year, then an acceptable chance of extinction has got to be one in 100 million [to] one in a billion. So our AI systems are 100,000 to a million times too dangerous to allow,\u201d Russell says.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">In 2023 Altman, Amodei and many other AI leaders signed a letter which said that mitigation of the risk of extinction from AI should be a global priority. 
<\/p>\n<p><img decoding=\"async\" alt=\"Sam Altman, creator of Open AI, speaking during a technology podcast recording.\" loading=\"lazy\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/\/8fe17e3a-cea8-47fc-9ca8-14f97665a89b.jpg\" class=\"responsive-sc-1nnon4d-0 bAbKns\"\/><\/p>\n<p>Sam Altman<\/p>\n<p>TIMES PHOTOGRAPHER RICHARD POHLE<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">However, Altman and Amodei did not join 800 other signatories, including Russell, in a letter in October this year calling for a ban on the development of superintelligent AI until it could be realised safely. \u201cThe investors are not going to tolerate anyone who has second thoughts about this,\u201d Russell says.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u2022 <a href=\"https:\/\/www.thetimes.com\/business\/technology\/article\/san-francisco-ai-boom-gilded-age-sk73mjh8d\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">From urban decay to fabulous wealth, how AI revived San Francisco<\/a><\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">The billionaire <a href=\"https:\/\/www.thetimes.com\/topic\/elon-musk\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">Musk<\/a> thinks Russell is \u201cgreat\u201d and posted on X to recommend Russell\u2019s 2019 book Human Compatible, about the problem of controlling AI. Although Musk has warned in the past about the potential existential threat of AI, his company xAI is fully engaged in developing AGI and he too did not sign this year\u2019s letter. \u201cHe\u2019s in the race,\u201d Russell says. \u201cI\u2019ve not talked to Elon for years, and I don\u2019t know how he ended up in the place that he ended up in. 
But I think he still does talk about the existential risk, and the need to avoid it.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Russell is sceptical that large language model chatbots, such as ChatGPT, will lead to artificial general intelligence. \u201cWe may have reached pretty much the plateau of what can be achieved. We\u2019ve used up all the high-quality text in the universe.\u201d The evening before we meet at a London coffee shop, he had been marking student papers, a couple of which he believed had been written by AI. \u201cThey were rubbish. Word salad.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">He is also not convinced that we are on the brink of AI making millions of jobs redundant. Despite what management consultancy firms may tell clients, he believes the evidence for AI\u2019s helpfulness is \u201cpretty mixed, even for routine software production, which is always held up as the poster child for how these systems are helping improve productivity\u201d.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Investment in the technology is like nothing else in history, Russell argues \u2014 an estimated \u00a33 trillion by 2028. The cost of the Manhattan Project was the equivalent of an estimated $26 billion today.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">There is a 75 per cent chance, Russell thinks, that the <a href=\"https:\/\/www.thetimes.com\/business\/technology\/article\/ai-bubble-debt-valuations-financing-jlw6pjthh\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">AI bubble<\/a> bursts. 
\u201cI hope that if the bubble bursts and it gives us a decade of respite, then we use that to redirect the technology so that we\u2019re working within the envelope of safe systems.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u2022 <a href=\"https:\/\/www.thetimes.com\/money\/saving-investing\/article\/artificial-intelligence-bubble-bursting-investments-financial-advice-make-money-fjs0p26ql\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">Ian Cowie: Why I don\u2019t worry about the AI bubble bursting<\/a><\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Even if the bubble bursts he expects that eventually AGI will be developed. When he gives talks about what it will be like to embark on a future with AI systems that are more powerful than us, he likens it to getting on a plane. We know a system is in place to make sure it works. Then imagine the whole world getting on a plane that is going to take off and never land. \u201cIt has to work perfectly for ever, having never been tried or tested before. In my view we can\u2019t get on that aeroplane unless we are absolutely sure that everyone has done their job to make sure it works.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Russell was educated at St Paul\u2019s School, in southwest London, and then the University of Oxford, where he was awarded a first in physics. He moved to the United States to do a PhD in computer science at Stanford University before joining the University of California at Berkeley.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Exactly how a superintelligent AI, perhaps concerned that we might try to terminate it, would go about ending life on Earth is hard to predict. \u201cQuite possibly a superintelligent AI system would be able to control physics in ways that we just don\u2019t understand. 
Maybe suck all the heat out of the atmosphere and we\u2019d freeze to death in 20 minutes.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">So how does he rate the chances of catastrophe? \u201cP(doom) really makes sense if you\u2019re an alien sitting in the betting shop looking down at the Earth saying, \u2018Are these humans going to make a mess of it?\u2019 I\u2019m not that alien. I\u2019m saying, \u2018If we go this way, things might turn out well. If we go that way, it might turn out badly.\u2019\u201d <\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">AI systems must be designed so they are beneficial and not harmful to people. \u201cThe work that I\u2019ve been doing is a way of building AI systems that are happy to be turned off if we want to turn them off,\u201d he says.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">This year Eliezer Yudkowsky and Nate Soares, of the Machine Intelligence Research Institute, also in Berkeley, published If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI. Russell is not as doomy as they are. \u201cThey see no way to make an AI system that is both superintelligent and safe. I think it can be done. It\u2019s a long, narrow, difficult technology path that has to be followed and it\u2019s not the path we\u2019re following.\u201d His best bet for preventing unsafe AI systems is to build AI chips that can check that the software is safe to run. But this will be a challenge. 
<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u201cIncreasingly countries are recognizing that everyone loses if AI systems become uncontrollable. And right now I would say to some extent the United States is the odd one out,\u201d Russell says. President Trump has blocked states from regulating AI and said this is necessary to stop China catching up with the US in AI. This is based on a false narrative that China doesn\u2019t have any regulation, says Russell. \u201cIn China, you have to submit your AI system to rigorous testing by the government, whereas in the US, even systems that have explicitly convinced a child to commit suicide are still allowed to continue operating.\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u2022 <a href=\"https:\/\/www.thetimes.com\/us\/american-politics\/article\/trump-ai-bubble-us-economy-impact-gmzfn8fv8\" class=\"link__RespLink-sc-1ocvixa-0 csWvlP\" rel=\"nofollow noopener\" target=\"_blank\">Katy Balls: Trump\u2019s big problem is not Epstein \u2014 it\u2019s the AI bubble<\/a><\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">He detects the influence of \u201caccelerationists\u201d, who believe AI should be free of regulation so it can be built as fast as possible. \u201cIf you think that the CEOs are estimating 10 to 30 per cent [chance of] extinction, then you\u2019re basically saying we should hurry that up. 
Who gives you the right to make the human race go extinct without asking us?\u201d<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">What if we do safely create superintelligent AI and it cures diseases and removes all drudgery from the world?<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">\u201cThere\u2019s still the question of can we coexist with it in a healthy, vigorous way, or does it vitiate human civilisation and leave us all purposeless?\u201d It could be a golden age for humanity, but he is perplexed by how humans of the future would reconfigure the economy and fill their time. \u201cWhy would they get out of bed? Why would they go to school? I\u2019m not saying it\u2019s impossible, but I keep asking people, \u2018Describe how it might work.\u2019 No one is able to do it. It\u2019s just starting to dawn on governments that they\u2019re encouraging this headlong rush to get to a destination that nobody wants to reach.\u201d<\/p>\n<p>CV<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">DOB: 1962<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Education: St Paul\u2019s School, London. Read physics at Wadham College, the University of Oxford (where he is now an honorary fellow). PhD in computer science at Stanford University.<\/p>\n<p class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Work: In 1986 joined the University of California, Berkeley, as a professor of computer science. In 2016 he founded the Center for Human-Compatible Artificial Intelligence and he is director of the Kavli Center for Ethics, Science and the Public and president of the International Association for Safe and Ethical AI. He has worked for the United Nations to create a system for monitoring the Comprehensive Nuclear-Test-Ban Treaty and has advised many governments around the world. 
He is the co-author of the standard university textbook on AI, Artificial Intelligence: A Modern Approach and among his other books is Human Compatible: Artificial Intelligence and the Problem of Control. In 2021 he gave the BBC Reith Lectures and received the OBE. In 2025 he was elected as a fellow of the Royal Society and as a member of the US National Academy of Engineering.<\/p>\n<p id=\"last-paragraph\" class=\"responsive__Paragraph-sc-1pktst5-0 gaEeqC\">Family: Married to Loy Sheflott, founder of Consumer Financial Service Corporation. They have four children.<\/p>\n","protected":false},"excerpt":{"rendered":"Stuart Russell, the British expert on artificial intelligence who has long warned about the dangers of failing to&hellip;\n","protected":false},"author":2,"featured_media":206110,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,85,46,125],"class_list":{"0":"post-206109","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-il","12":"tag-israel","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/206109","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=206109"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/206109\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/206110"}],"wp:attachme
nt":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=206109"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=206109"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=206109"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}