AI heavyweights call for end to 'superintelligence' research

<p>I have worked in AI for more than three decades, including with pioneers such as <a href="http://jmc.stanford.edu/" rel="nofollow noopener" target="_blank">John McCarthy</a>, who <a href="http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html" rel="nofollow noopener" target="_blank">coined the term</a> "artificial intelligence" in 1955.</p>

<p>In the past few years, scientific breakthroughs have produced AI tools that promise unprecedented advances in <a href="https://doi.org/10.1038/s41591-025-03983-2" rel="nofollow noopener" target="_blank">medicine</a>, <a href="https://doi.org/10.1038/d41586-024-03214-7" rel="nofollow noopener" target="_blank">science</a>, <a href="https://doi.org/10.1038/s41598-025-98385-2" rel="nofollow noopener" target="_blank">business</a> and <a href="https://www.csiro.au/en/news/all/articles/2024/november/ai-in-education" rel="nofollow noopener" target="_blank">education</a>.</p>

<p>At the same time, leading AI companies have the stated goal of creating <a href="https://blog.samaltman.com/the-gentle-singularity" rel="nofollow noopener" target="_blank">superintelligence</a>: not merely smarter tools, but AI systems that <a href="https://theconversation.com/what-is-ai-superintelligence-could-it-destroy-humanity-and-is-it-really-almost-here-240682" rel="nofollow noopener" target="_blank">significantly outperform all humans</a> on essentially all cognitive tasks.</p>

<p>Superintelligence isn't just hype. It's a strategic goal determined by a privileged few, and backed by <a href="https://www.cnbc.com/2025/10/21/are-we-in-an-ai-bubble.html" rel="nofollow noopener" target="_blank">hundreds of billions of dollars in investment</a>, business incentives, frontier AI technology, and some of the world's best researchers.</p>

<p>What was once science fiction has become a concrete engineering goal for the coming decade. In response, I and hundreds of other scientists, global leaders and public figures have put our names to a <a href="https://superintelligence-statement.org/" rel="nofollow noopener" target="_blank">public statement</a> calling for superintelligence research to stop.</p>

<p>What the statement says</p>

<p>The new statement, released today by the AI safety nonprofit <a href="https://futureoflife.org/" rel="nofollow noopener" target="_blank">Future of Life Institute</a>, is not a call for a temporary pause, <a href="https://futureoflife.org/open-letter/pause-giant-ai-experiments/" rel="nofollow noopener" target="_blank">as we saw in 2023</a>. It is a short, unequivocal call for a global ban:</p>

<p>We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.</p>

<p>The list of signatories represents a remarkably broad coalition, bridging divides that few other issues can. The "godfathers" of modern AI are present, such as <a href="https://mila.quebec/en/directory/yoshua-bengio" rel="nofollow noopener" target="_blank">Yoshua Bengio</a> and <a href="https://www.cs.toronto.edu/%7Ehinton/" rel="nofollow noopener" target="_blank">Geoff Hinton</a>. So are leading safety researchers such as UC Berkeley's <a href="https://vcresearch.berkeley.edu/faculty/stuart-russell" rel="nofollow noopener" target="_blank">Stuart Russell</a>.</p>

<p>But the concern has broken free of academic circles. The list includes tech and business leaders such as Apple cofounder Steve Wozniak and Virgin's Richard Branson. It includes high-level political and military figures from both sides of US politics, such as former National Security Advisor Susan Rice and former chairman of the Joint Chiefs of Staff Mike Mullen. It also includes prominent media figures such as Glenn Beck and former Trump strategist Steve Bannon, together with artists such as Will.I.am and respected historians such as Yuval Noah Harari.</p>

<p>Why superintelligence poses a unique challenge</p>

<p>Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains and air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology.</p>

<p>Superintelligence could extend this trajectory, but with a crucial difference: people would no longer be in control.</p>

<p>The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs.</p>

<p>Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species producing the greenhouse gases.</p>

<p>Instruct it to maximise human happiness, and it might find a way to trap every human brain in a perpetual dopamine loop. Or, in Swedish philosopher Nick Bostrom's <a href="https://nickbostrom.com/ethics/ai" rel="nofollow noopener" target="_blank">famous example</a>, a superintelligence tasked with producing as many paperclips as possible might try to convert all of Earth's matter, including us, into raw material for its factories.</p>

<p>The issue is not malice but mismatch: a system that understands its instructions too literally, with the power to act cleverly and swiftly.</p>

<p>History shows what can go wrong when our systems grow beyond our capacity to predict, contain or control them.</p>

<p>The 2008 financial crisis began with <a href="https://www.rba.gov.au/education/resources/explainers/the-global-financial-crisis.html" rel="nofollow noopener" target="_blank">financial instruments</a> so intricate that even their creators could not foresee how they would interact until the entire system collapsed. Cane toads <a href="https://www.nma.gov.au/defining-moments/resources/introduction-of-cane-toads" rel="nofollow noopener" target="_blank">introduced in Australia to fight pests</a> have instead devastated native species. The COVID pandemic exposed how <a href="https://doi.org/10.1136/bmjgh-2020-004537" rel="nofollow noopener" target="_blank">global travel networks</a> can turn local outbreaks into worldwide crises.</p>

<p>Now we stand on the verge of creating something far more complex: a mind that can rewrite its own code, redesign and pursue its own goals, and out-think all humans combined.</p>

<p>A history of inadequate governance</p>

<p>For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs.</p>

<p>These are important issues, but they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not on the ultimate stated goal of AI companies: to create superintelligence.</p>

<p>The new statement on superintelligence aims to start a global conversation not just about specific AI tools, but about the very destination AI developers are steering us toward.</p>

<p>The goal of AI should be to create powerful tools that serve humanity, not autonomous superintelligent agents that operate beyond human control and without alignment to human well-being.</p>

<p>We can have a future of AI-powered medical breakthroughs, scientific discovery and personalised education. None of these requires us to build an uncontrollable superintelligence that could unilaterally decide the fate of humanity.</p>