{"id":346327,"date":"2025-12-13T23:32:08","date_gmt":"2025-12-13T23:32:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/346327\/"},"modified":"2025-12-13T23:32:08","modified_gmt":"2025-12-13T23:32:08","slug":"anthropics-chief-scientist-says-were-rapidly-approaching-the-moment-that-could-doom-us-all","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/346327\/","title":{"rendered":"Anthropic&#8217;s Chief Scientist Says We&#8217;re Rapidly Approaching the Moment That Could Doom Us All"},"content":{"rendered":"<p>\t<img decoding=\"async\" class=\"archive-post-thumb article-featured-image w-full h-auto mb-3\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/12\/anthropic-ai-scientist-doom.jpg\"   fetchpriority=\"high\" width=\"2048\" height=\"1365\" alt=\"Anthropic's chief scientist Jared Kaplan says humanity will soon have a big decision to make on whether to take the &quot;ultimate risk&quot; on AI.\"\/><\/p>\n<p>\t\t\tIllustration by Tag Hartman-Simkins \/ Futurism. Source: Getty Images<\/p>\n<p class=\"pw-incontent-excluded article-paragraph skip\">Anthropic\u2019s chief scientist Jared Kaplan is making some grave predictions about humanity\u2019s future with AI.<\/p>\n<p class=\"article-paragraph skip\">The choice, in his framing, is ours: for now, our fates are mostly in our hands \u2014 unless we decide to pass the proverbial baton to the machines.<\/p>\n<p class=\"article-paragraph skip\">That point is fast approaching, he says in a <a href=\"https:\/\/www.theguardian.com\/technology\/ng-interactive\/2025\/dec\/02\/jared-kaplan-artificial-intelligence-train-itself\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">new interview with The Guardian<\/a>. By 2030, and perhaps as soon as 2027, Kaplan predicts, humanity will have to decide whether to take the \u201cultimate risk\u201d of letting AI models train themselves. 
The ensuing \u201cintelligence explosion\u201d could elevate the tech to new heights, birthing a so-called artificial general intelligence (AGI) that equals or surpasses human intellect and benefits humankind with all sorts of scientific and medical advancements. Or it could allow AI\u2019s power to snowball beyond our control, leaving us at the mercy of its whims.<\/p>\n<p class=\"article-paragraph skip\">\u201cIt sounds like a kind of scary process,\u201d he told the newspaper. \u201cYou don\u2019t know where you end up.\u201d<\/p>\n<p class=\"article-paragraph skip\">Kaplan is one of many prominent figures in AI warning about the field\u2019s potentially disastrous consequences. Geoffrey Hinton, one of the three so-called godfathers of AI, famously declared he regretted his life\u2019s work, and has frequently warned about how AI could <a href=\"https:\/\/futurism.com\/the-byte\/godfather-ai-risk-eliminate-humanity\" rel=\"nofollow noopener\" target=\"_blank\">upend or even destroy society<\/a>. OpenAI CEO Sam Altman predicts that AI will <a href=\"https:\/\/futurism.com\/sam-altman-openai-wipe-out-categories-human-jobs\" rel=\"nofollow noopener\" target=\"_blank\">wipe out entire categories of labor<\/a>. Kaplan\u2019s boss, CEO Dario Amodei, recently <a href=\"https:\/\/www.axios.com\/2025\/05\/28\/ai-jobs-white-collar-unemployment-anthropic\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">warned<\/a> that AI could take over half of all entry-level white-collar jobs, and accused his competitors of \u201csugarcoating\u201d just how badly AI will disrupt society.<\/p>\n<p class=\"article-paragraph skip\">It sounds like Kaplan agrees with his boss\u2019s jobs assessment. AI will be able to do \u201cmost white-collar work\u201d in two to three years, he said in the interview. 
And while he\u2019s optimistic we\u2019ll be able to keep AIs aligned with human interests, he\u2019s also worried about the prospect of allowing powerful AI to train other AIs, an \u201cextremely high-stakes decision\u201d we\u2019ll have to make in the near future.<\/p>\n<p class=\"article-paragraph skip\">\u201cThat\u2019s the thing that we view as maybe the biggest decision or scariest thing to do\u2026 once no one\u2019s involved in the process, you don\u2019t really know,\u201d he told The Guardian. \u201cOne is do you lose control over it? Do you even know what the AIs are doing?\u201d<\/p>\n<p class=\"article-paragraph skip\">To an extent, larger AI models are already used to train smaller AI models in a process called distillation, which allows the smaller AI to essentially catch up with its larger teacher. Kaplan, however, is worried about what\u2019s termed recursive self-improvement, in which the AIs learn without human intervention and make substantial leaps in their capabilities.<\/p>\n<p class=\"article-paragraph skip\">Whether we allow that to happen comes down to some heavy philosophical questions about the tech.<\/p>\n<p class=\"article-paragraph skip\">\u201cThe main question there is: are the AIs good for humanity?\u201d Kaplan said. \u201cAre they helpful? Are they going to be harmless? Do they understand people? Are they going to allow people to continue to have agency over their lives and over the world?\u201d<\/p>\n<p class=\"article-paragraph skip\">While AI\u2019s dangers are real, Kaplan\u2019s warnings warrant some careful unpacking. For one, they uphold the premise that AI is already some of the most consequential and important tech ever made, regardless of whether existing AI systems represent the powerful autonomous machines warned of in so many cautionary sci-fi tales \u2014 or are at least a meaningful stepping stone to getting there. 
The adage goes that there\u2019s no such thing as bad publicity, and doomsaying, especially in the AI industry, is its own form of hype. Visions of apocalypse distract from AI\u2019s more mundane consequences, like its <a href=\"https:\/\/www.unep.org\/news-and-stories\/story\/ai-has-environmental-problem-heres-what-world-can-do-about\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">staggering environmental toll<\/a>, its <a href=\"https:\/\/www.newscientist.com\/article\/2502650-ai-firms-began-to-feel-the-legal-wrath-of-copyright-holders-in-2025\/\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">flouting of copyright laws<\/a>, and its <a href=\"https:\/\/futurism.com\/artificial-intelligence\/people-ai-chatbots-mental-distress\" rel=\"nofollow noopener\" target=\"_blank\">addictive, delusion-inducing cognitive effects<\/a>.<\/p>\n<p class=\"article-paragraph skip\">Moreover, many AI experts, including some of the field\u2019s foundational figures like Yann LeCun, don\u2019t believe that the LLM architecture that underpins AI chatbots is capable of blossoming into the all-powerful, intelligent systems that figures like Kaplan are so worried about. It\u2019s <a href=\"https:\/\/www.axios.com\/2025\/08\/21\/ai-wall-street-big-tech\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">not even clear if AI is actually increasing productivity<\/a> in the workplace, with <a href=\"https:\/\/futurism.com\/future-society\/ai-productivity-research\" rel=\"nofollow noopener\" target=\"_blank\">some research suggesting the opposite<\/a> \u2014 not to mention several notable cases of bosses replacing their workers with AI agents, only to <a href=\"https:\/\/futurism.com\/klarna-ai-automation-engineers\" rel=\"nofollow noopener\" target=\"_blank\">rehire them once the tools failed<\/a>.<\/p>\n<p class=\"article-paragraph skip\">Kaplan conceded it\u2019s possible that AI\u2019s capabilities could stagnate. 
\u201cMaybe the best AI ever is the AI that we have right now,\u201d he mused. \u201cBut we really don\u2019t think that\u2019s the case. We think it\u2019s going to keep getting better.\u201d<\/p>\n<p class=\"article-paragraph skip\">More on AI: <a href=\"https:\/\/futurism.com\/artificial-intelligence\/google-pichai-ai-labor\" rel=\"nofollow noopener\" target=\"_blank\">Google CEO Says We\u2019re All Going to Have to Suffer Through It as AI Puts Society Through the Woodchipper<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Illustration by Tag Hartman-Simkins \/ Futurism. Source: Getty Images Anthropic\u2019s chief scientist Jared Kaplan is making some grave&hellip;\n","protected":false},"author":2,"featured_media":346328,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-346327","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/346327","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=346327"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/346327\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/346328"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=346327"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=346327"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=346327"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}