{"id":172140,"date":"2025-12-07T10:24:17","date_gmt":"2025-12-07T10:24:17","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/172140\/"},"modified":"2025-12-07T10:24:17","modified_gmt":"2025-12-07T10:24:17","slug":"how-close-are-todays-ai-models-to-agi-and-to-self-improving-into-superintelligence","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/172140\/","title":{"rendered":"How Close Are Today\u2019s AI Models to AGI\u2014And to Self-Improving into Superintelligence?"},"content":{"rendered":"<p class=\"article_pub_date-zPFpJ\">December 6, 2025<\/p>\n<p class=\"article_read_time-ZYXEi\">5 min read<\/p>\n<p>Are We Seeing the First Steps Toward AI Superintelligence?<\/p>\n<p>Today\u2019s leading AI models can already write and refine their own software. 
The question is whether that self-improvement can ever snowball into true superintelligence<\/p>\n<p class=\"article_authors-ZdsD4\">By <a class=\"article_authors__link--hwBj\" href=\"https:\/\/www.scientificamerican.com\/author\/deni-ellis-bechard\/\" rel=\"nofollow noopener\" target=\"_blank\">Deni Ellis B\u00e9chard<\/a> edited by <a class=\"article_authors__link--hwBj\" href=\"https:\/\/www.scientificamerican.com\/author\/eric-sullivan\/\" rel=\"nofollow noopener\" target=\"_blank\">Eric Sullivan<\/a><\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/il\/wp-content\/uploads\/2025\/12\/GettyImages-1291355222.jpg\" alt=\"Digital human face composed of glowing particles connects to futuristic microchip emitting bright data streams\"   class=\"lead_image__img-xKODG\" style=\"--w:3000;--h:2250\" fetchpriority=\"high\"\/> <\/p>\n<p>KTSDESIGN\/SCIENCE PHOTO LIBRARY<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\"><a href=\"https:\/\/www.scientificamerican.com\/article\/are-we-living-in-a-computer-simulation1\/\" rel=\"nofollow noopener\" target=\"_blank\">The Matrix<\/a>, <a href=\"https:\/\/www.scientificamerican.com\/article\/has-ai-already-brought-us-the-terminator-future\/\" rel=\"nofollow noopener\" target=\"_blank\">The Terminator<\/a>\u2014so much of our science fiction is built around the dangers of superintelligent <a href=\"https:\/\/www.scientificamerican.com\/artificial-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">artificial intelligence<\/a>: a system that exceeds the best humans across nearly all cognitive domains. 
OpenAI CEO <a href=\"https:\/\/www.scientificamerican.com\/article\/what-does-artificial-general-intelligence-actually-mean\/\" rel=\"nofollow noopener\" target=\"_blank\">Sam Altman<\/a> and Meta CEO <a href=\"https:\/\/www.scientificamerican.com\/article\/what-are-ai-agents-and-why-are-they-about-to-be-everywhere\/\" rel=\"nofollow noopener\" target=\"_blank\">Mark Zuckerberg<\/a> have predicted we\u2019ll achieve such AI in the coming years. Yet machines like those depicted as battling humanity in those movies would have to be far more <a href=\"https:\/\/www.scientificamerican.com\/article\/how-does-chatgpt-think-psychology-and-neuroscience-crack-open-ai-large\/\" rel=\"nofollow noopener\" target=\"_blank\">advanced than ChatGPT<\/a>, not to mention more capable of making Excel spreadsheets than Microsoft Copilot. So how can anyone think we\u2019re remotely close to <a href=\"https:\/\/www.scientificamerican.com\/blog\/observations\/dont-panic-about-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">artificial superintelligence<\/a>?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">One answer goes back to 1965, when statistician Irving John Good introduced the idea of an \u201c<a href=\"https:\/\/www.sciencedirect.com\/science\/chapter\/bookseries\/pii\/S0065245808604180\" rel=\"nofollow noopener\" target=\"_blank\">ultraintelligent machine<\/a>.\u201d He wrote that once it became sufficiently sophisticated, a computer would rapidly improve itself. If this seems far-fetched, consider how AlphaGo Zero\u2014an AI system developed at DeepMind in 2017 to play the ancient Chinese board game Go\u2014was built. 
Using no data from human games, <a href=\"https:\/\/www.scientificamerican.com\/article\/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor\/\" rel=\"nofollow noopener\" target=\"_blank\">AlphaGo Zero<\/a> played itself millions of times, achieving in days an improvement that would have taken a human a lifetime and that allowed it to defeat the previous versions of AlphaGo that had already beaten the world\u2019s best human players. Good\u2019s idea was that any system that was sufficiently intelligent to rewrite itself would create iterations of itself, each one smarter than the previous and even more capable of improvement, triggering an \u201cintelligence explosion.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">The question, then, is how close we are to that first system capable of <a href=\"https:\/\/www.scientificamerican.com\/article\/ai-can-design-an-autonomous-robot-in-30-seconds\/\" rel=\"nofollow noopener\" target=\"_blank\">autonomous self-improvement<\/a>. Though the runaway systems Good described aren\u2019t here yet, self-improving computers are\u2014at least in narrow domains. AI is already running code on itself. OpenAI\u2019s Codex and Anthropic\u2019s Claude Code can work independently for an hour or more writing new code or updating existing code. Using Codex recently, I thumbed a prompt into my phone while on a walk, and it made a working website before I reached home. In the hands of skilled coders, such systems can do dramatically more, from reorganizing large code bases to sketching entirely new ways to build the software in the first place.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">So why hasn\u2019t a model powering ChatGPT quietly coded itself into ultraintelligence? The hitch is in the phrase above: \u201cin the hands of skilled coders.\u201d Despite AI\u2019s impressive improvements, our current systems still rely on humans to set goals, design experiments and decide which changes count as genuine progress. They\u2019re not yet capable of evolving independently in a robust way, which makes some talk about imminent superintelligence seem blown out of proportion\u2014unless, of course, current AI systems are closer than they appear to being able to self-improve in increasingly broad slices of their abilities.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">One area in which they already look superhuman is how much information they can absorb and manipulate. The most advanced models are trained on far more text than any human could read in a lifetime\u2014from poetry to history to the sciences. They can also keep track of far longer stretches of text while they work. Already, with commercially available systems such as ChatGPT and Gemini, I can upload a stack of books and have the AI synthesize and critique them in a way that would take a human weeks. That doesn\u2019t mean the result is always correct or insightful\u2014but it does mean that, in principle, a system like this could read its own documentation, logs, and code and propose changes at a speed and scale no engineering team could match.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Reasoning, however, is where these systems lag\u2014though that\u2019s no longer true in certain focused areas. 
DeepMind\u2019s AlphaDev and related systems have already found new, more efficient algorithms for tasks such as sorting, results that are now used in real-world code and that go beyond simple statistical mimicry. Other models excel at formal mathematics and graduate-level science questions that resist simple pattern-matching. We can debate the value of any particular benchmark\u2014and researchers are doing exactly that\u2014but there\u2019s no question that some AI systems have become capable of discovering solutions humans had not previously found.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">If the systems already have these abilities, what, then, is the missing piece? One answer is artificial general intelligence (AGI), the sort of dynamic, flexible reasoning that allows humans to learn from one field and apply it to others. <a href=\"https:\/\/www.scientificamerican.com\/article\/every-ai-breakthrough-shifts-the-goalposts-of-artificial-general\/?_gl=1*1ir7ev*_up*MQ..*_ga*ODAyMTM4NzgwLjE3NjQ5NzIwMjQ.*_ga_0P6ZGEWQVE*czE3NjQ5NzIwMjMkbzEkZzAkdDE3NjQ5NzIwODckajU4JGwwJGgw\" rel=\"nofollow noopener\" target=\"_blank\">As I\u2019ve previously written<\/a>, we keep <a href=\"https:\/\/www.scientificamerican.com\/article\/every-ai-breakthrough-shifts-the-goalposts-of-artificial-general\/\" rel=\"nofollow noopener\" target=\"_blank\">shifting our definitions<\/a> of AGI as machines master new skills. But for the superintelligence question, what matters is not the label we attach; it\u2019s whether a system can use its skills to reliably redesign and upgrade itself.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">And this brings us back to Good\u2019s \u201cintelligence explosion.\u201d If we do build systems with that kind of flexible, humanlike reasoning across many domains, what will separate them from superintelligence? 
Advanced models are already trained on more science and literature than any human, have far greater working memories and show extraordinary reasoning skills in limited domains. Once that missing piece of flexible reasoning is in place, and once we allow such systems to deploy those skills on their own code, data and training processes, could the leap to fully superhuman performance be shorter than we imagine?<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">Not everyone agrees. Some researchers believe we have yet to fundamentally understand intelligence and that this missing piece will take longer than expected to engineer. Others speak of AGI being achieved in a few years, leading to further advances far beyond human capacities. In 2024 Altman publicly suggested that superintelligence could arrive \u201c<a href=\"https:\/\/ia.samaltman.com\/\" rel=\"nofollow noopener\" target=\"_blank\">in a few thousand days<\/a>.\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">If this sounds too much like science fiction, consider that AI companies regularly run safety tests on their systems to make sure they can\u2019t go into a runaway self-improvement loop. <a href=\"https:\/\/evaluations.metr.org\/gpt-5-1-codex-max-report\/\" rel=\"nofollow noopener\" target=\"_blank\">METR<\/a>, an independent AI safety group, evaluates models according to how long they can reliably sustain a complex task before reaching failure. This past November, its tests of GPT-5.1-Codex-Max came in around two hours and 42 minutes. This is a huge leap from GPT-4\u2019s few minutes of such performance on the same metric, but it isn\u2019t the situation Good described.<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\"><a href=\"https:\/\/www.scientificamerican.com\/podcast\/episode\/anthropics-claude-4-chatbot-suggests-it-might-be-conscious\/\" rel=\"nofollow noopener\" target=\"_blank\">Anthropic runs similar tests<\/a> on its AI systems. 
\u201cTo be clear, we are not yet at \u2018self-improving AI,\u2019\u201d wrote the company\u2019s co-founder and head of policy Jack Clark <a href=\"https:\/\/jack-clark.net\/2025\/10\/13\/import-ai-431-technological-optimism-and-appropriate-fear\/\" rel=\"nofollow noopener\" target=\"_blank\">in October<\/a>, \u201cbut we are at the stage of \u2018AI that improves bits of the next AI, with <a href=\"https:\/\/www.scientificamerican.com\/article\/can-a-generative-ai-agent-accurately-mimic-my-personality\/\" rel=\"nofollow noopener\" target=\"_blank\">increasing autonomy<\/a>.\u2019\u201d<\/p>\n<p class=\"\" data-block=\"sciam\/paragraph\">If AGI is achieved, and we add human-level judgment to an immense information base, vast working memory and extraordinary speed, Good\u2019s idea of rapid <a href=\"https:\/\/www.scientificamerican.com\/article\/our-evolutionary-past-can-teach-us-about-ais-future\/\" rel=\"nofollow noopener\" target=\"_blank\">self-improvement<\/a> starts to look less like science fiction. The real question is whether we\u2019ll stop at \u201cmere human\u201d\u2014or risk overshooting.<\/p>\n","protected":false},"excerpt":{"rendered":"December 6, 2025 5 min read Are We Seeing the First Steps Toward&hellip;\n","protected":false},"author":2,"featured_media":172141,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,85,46,125],"class_list":{"0":"post-172140","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-il","12":"tag-israel","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/172140","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=172140"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/172140\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/172141"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=172140"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=172140"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=172140"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}