{"id":29106,"date":"2025-09-18T15:19:07","date_gmt":"2025-09-18T15:19:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/29106\/"},"modified":"2025-09-18T15:19:07","modified_gmt":"2025-09-18T15:19:07","slug":"ai-models-must-adapt-or-die","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/29106\/","title":{"rendered":"AI models must adapt or die"},"content":{"rendered":"<p>The nosebleed valuations in the US tech sector partly reflect the belief that artificial general intelligence is within sight. Even though few agree on what AGI means exactly, investors seem convinced that a stronger form of generalisable AI will transform economic productivity and make mountainous fortunes for its creators.\u00a0<\/p>\n<p>To sustain that story, US tech firms have been pouring hundreds of billions of dollars into building more AI infrastructure to scale their computing power. The trouble is that scaling is now producing diminishing returns and some researchers doubt whether the AI industry\u2019s route map will ever lead to fully generalisable intelligence. <a href=\"https:\/\/www.nytimes.com\/2025\/09\/03\/opinion\/ai-gpt5-rethinking.html\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">Arch-sceptic Gary Marcus<\/a> wrote recently that generative AI models were still best viewed as \u201csouped-up regurgitation machines\u201d that struggled with truth, hallucinations and reasoning and would never bring us to the \u201choly grail of AGI\u201d.<\/p>\n<p>The debate about the limits of scaling has been raging for years and, up until now, the doubters have been proved wrong. 
In 2019 the computer scientist Rich Sutton wrote <a href=\"http:\/\/www.incompleteideas.net\/IncIdeas\/BitterLesson.html\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">The Bitter Lesson<\/a>, arguing that the best way to solve AI problems was to keep throwing more data and computing power at them. The bitter lesson was that human ingenuity was overrated and constantly outstripped by the power of scaling.\u00a0<\/p>\n<p>\u201cThe biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin,\u201d he wrote. For the biggest AI companies, which have all pursued such deep learning approaches, that lesson still underpins their vast investments.<\/p>\n<p>But the doubters have kept doubting and they were given extra ammunition by OpenAI\u2019s launch of GPT-5, which led some to argue that generative AI is finally <a href=\"https:\/\/www.ft.com\/content\/d01290c9-cc92-4c1f-bd70-ac332cd40f94\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">hitting a wall<\/a>. Interest is now picking up again in alternative approaches to advance AI, including some of the so-called \u201cgood old fashioned AI\u201d techniques, long scorned by deep learning researchers.<\/p>\n<p>One expert proposing a different path is Karl Friston, a professor of neuroscience at University College London and chief scientist at the Canadian cognitive computing company Verses, who thinks we still have much to learn from biological intelligence. He acknowledges that the latest generative AI models are \u201cabsolutely astounding\u201d. But he argues they will never achieve AGI as they \u201chave no agency under the hood\u201d.\u00a0<\/p>\n<p>That can only be achieved through \u201cactive inference\u201d, which means having a theory of the world and the ability to predict and adapt. 
\u201cFor true AGI, you have to be active and embodied and situated physically in a world that you can act upon before you can even think about applying the notion of intelligence,\u201d he tells me.<\/p>\n<p>As an example, generative AI models might be good at analysing historic patterns of traffic flows in a city. But what happens when Taylor Swift comes to town? A truly intelligent system has to have a constantly evolving model of how the world works and predict the likely effect of novel factors.\u00a0<\/p>\n<p>\u201cAI breaks when it hits the real world because the real world keeps changing,\u201d says Gabriel Ren\u00e9, Verses\u2019 chief executive. His company\u2019s vision is to help build a network of billions of small, adaptive, domain-specific agents that do one thing well rather than rely on a massive system that tries to do billions of things well. \u201cIntelligence is about adaptation. It\u2019s not about compressing historical knowledge and memory.\u201d<\/p>\n<p>Intriguingly, even Sutton appears to be warming to this way of thinking. In a recent paper called <a href=\"https:\/\/theaiinnovator.com\/welcome-to-the-era-of-experience\/\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">Welcome to the Era of Experience, <\/a>Sutton and fellow researcher David Silver argue that almost all the high-quality data that could improve an AI agent\u2019s performance has been consumed. That means AI agents will now have to generate fresh inputs from interacting with the real world to gain experiences and data points. \u201cAI is at the cusp of a new period in which experience will become the dominant medium of improvement and ultimately dwarf the scale of human data used in today\u2019s systems,\u201d they write. 
\u201cThe pace of progress driven solely by supervised learning from human data is demonstrably slowing, signalling the need for a new approach.\u201d<\/p>\n<p>Investors may be reassured that there remain millions of lucrative use cases for existing generative AI models. And the big AI companies can, of course, pivot and increasingly pursue hybrid approaches to build more efficient models in future. Still, investors had better hope that the AI companies, as well as their agents, learn fast from experience and adapt to a changing world.<\/p>\n<p><a href=\"mailto:john.thornhill@ft.com\" data-trackable=\"link\" rel=\"nofollow noopener\" target=\"_blank\">john.thornhill@ft.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Stay informed with free updates Simply sign up to the Artificial intelligence myFT Digest &#8212; delivered directly to&hellip;\n","protected":false},"author":2,"featured_media":29107,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[241,61,60,80],"class_list":{"0":"post-29106","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-computing","9":"tag-ie","10":"tag-ireland","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/29106","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=29106"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/29106\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/29107"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=29106"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=29106"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=29106"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}