{"id":70394,"date":"2025-08-15T14:24:11","date_gmt":"2025-08-15T14:24:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/70394\/"},"modified":"2025-08-15T14:24:11","modified_gmt":"2025-08-15T14:24:11","slug":"openais-waterloo-with-corrections-marcus-on-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/70394\/","title":{"rendered":"OpenAI\u2019s Waterloo? [with corrections] &#8211; Marcus on AI"},"content":{"rendered":"<p>For the last several years, OpenAI has received more press than God. <\/p>\n<p>GPT-3, which most people hardly remember, \u201cwrote\u201d <a href=\"https:\/\/www.theguardian.com\/commentisfree\/2020\/sep\/08\/robot-wrote-this-article-gpt-3\" rel=\"nofollow noopener\" target=\"_blank\">an oped in The Guardian<\/a> in September 2020 (with human assistance behind the scenes) and ever since media coverage of large language models, generally focusing on OpenAI, has been nonstop. A zillion stories, often fawning, have been written about Sam Altman the boy genius and how ChatGPT would usher in some kind of amazing new era in science and medicine (spoiler alert: it hasn\u2019t, at least not yet) , and how it would radically increase GDP and productivity, ushering in age of abundance (that hasn\u2019t happened yet, either). For a while (mercifully finally over) every journalist and their cousin seem to think that the cleverest thing in the world was to open or close their essay with a quote from ChatGPT. Some people used to it write their wedding vows.<\/p>\n<p>As anyone who reads this knows, I have never been quite so positive. Around the same time as the Guardian Oped, Ernest Davis I warned that GPT0-3 was \u201c<a href=\"https:\/\/www.technologyreview.com\/2020\/08\/22\/1007539\/gpt3-openai-language-generator-artificial-intelligence-ai-opinion\/\" rel=\"nofollow noopener\" target=\"_blank\">a fluent spouter of bullshit<\/a>.\u201d. 
In 2023, shortly after ChatGPT was launched, I doubled down and told 60 Minutes that LLM output was \u201cauthoritative bullshit\u201d. (In order to accommodate the delicate ears of network television, the expletive was partly bleeped out.) I railed endlessly here and elsewhere about hallucinations and warned people to keep their expectations about <a href=\"https:\/\/garymarcus.substack.com\/p\/what-to-expect-when-youre-expecting\" rel=\"nofollow noopener\" target=\"_blank\">GPT-4<\/a> and <a href=\"https:\/\/garymarcus.substack.com\/p\/what-to-expect-when-youre-expecting-62e\" rel=\"nofollow noopener\" target=\"_blank\">GPT-5<\/a> modest. From <a href=\"https:\/\/garymarcus.substack.com\/p\/the-new-science-of-alt-intelligence\" rel=\"nofollow noopener\" target=\"_blank\">the first day of this newsletter<\/a>, in May 2022, my theme was that scaling alone would not get us to AGI.<\/p>\n<p>Without a doubt, that left me cast as a hater (when in truth I love AI and want it to succeed) and a villain. I was, almost daily, mocked and ridiculed for my criticism of LLMs, including by some of the most powerful people in tech, from Elon Musk to Yann LeCun (who, to his credit, eventually saw for himself the limits of LLMs) to Altman himself, who recently called me a \u201ctroll\u201d on X (only to turn tail when I responded in detail). <\/p>\n<p>I endlessly challenged these people to debate, to discuss the facts at hand. None of them accepted. Not once. Nobody ever wanted to talk science. <\/p>\n<p>And they didn\u2019t need to, not with media folks that I won\u2019t name often acting like cheerleaders, and even sometimes personally taking shots at me. For a long time, the strategy of endless hype and only occasional engagement with science worked like a charm, commercially if not scientifically. <\/p>\n<p>But media coverage is not science, and braggadocio alone cannot yield AGI. 
What is shocking is that suddenly, almost everywhere all at once, the veil has begun to lift. Over the last week, the world woke up to the fact that Altman wildly overpromised on GPT-5, with me no longer cast as villain but as hero, which is probably the stuff of Sam\u2019s worst nightmares.<\/p>\n<p>To regular readers of this newsletter, the underwhelming delivery of GPT-5 in itself should not have been surprising. But what is startling (and frankly satisfying) is how rapidly and radically the narrative has changed.<\/p>\n<p>You can see that narrative flip all over the place. CNN, for example:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!t-yM!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F785d2d1c-dcf1-4296-8148-122862030de4_1641x1754.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/08\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/785d2d1c-dcf1-4296-8148-122862030de4_1641.jpeg\" width=\"1456\" height=\"1556\" 
data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/785d2d1c-dcf1-4296-8148-122862030de4_1641x1754.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1556,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:279447,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/170969793?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F785d2d1c-dcf1-4296-8148-122862030de4_1641x1754.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>Which read in part (highlighting add by the reader who passed the story to me):<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!R7xp!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92a670ad-2ea1-44ff-b1ec-01a6ace253c1_531x858.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/08\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/92a670ad-2ea1-44ff-b1ec-01a6ace253c1_531x.jpeg\" width=\"531\" height=\"858\" 
data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/92a670ad-2ea1-44ff-b1ec-01a6ace253c1_531x858.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:858,&quot;width&quot;:531,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:334465,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/170969793?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92a670ad-2ea1-44ff-b1ec-01a6ace253c1_531x858.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\" title=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>Futurism had this to say:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!tqbI!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ad9ebcf-692f-41b8-ab54-37c7e90d0e9d_1712x1436.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/08\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/0ad9ebcf-692f-41b8-ab54-37c7e90d0e9d_1712.jpeg\" width=\"1456\" height=\"1221\" 
data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/0ad9ebcf-692f-41b8-ab54-37c7e90d0e9d_1712x1436.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1221,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1685772,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/170969793?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0ad9ebcf-692f-41b8-ab54-37c7e90d0e9d_1712x1436.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>writing in part, name checking me and with a supportive quote from Edinburgh researcher:<\/p>\n<p>Though Marcus&#8217;s more realistic view of AI made him a pariah in the excitable AI community, he&#8217;s no longer standing alone against scalable AI. 
Yesterday, University of Edinburgh AI scholar <a href=\"https:\/\/www.miragenews.com\/gpt-5-has-ai-hit-plateau-1512925\/\" rel=\"nofollow noopener\" target=\"_blank\">Michael Rovatsos wrote<\/a> that &#8220;it is possible that the release of GPT-5 marks a shift in the evolution of AI which&#8230; might usher in the end of creating ever more complicated models whose thought processes are impossible for anyone to understand.&#8221;<\/p>\n<p>And my own essay on all this, <a href=\"https:\/\/garymarcus.substack.com\/p\/gpt-5-overdue-overhyped-and-underwhelming?r=8tdk6\" rel=\"nofollow noopener\" target=\"_blank\">GPT-5: Overdue, overhyped and underwhelming<\/a>, went viral, with over 163,000 views.<\/p>\n<p>Meanwhile, to my immense satisfaction, The New Yorker landed firmly on team Marcus, with computer scientist Cal Newport adding a car metaphor I wish I had coined myself:<\/p>\n<p>\u201cIn the aftermath of GPT-5\u2019s launch, it has become more difficult to take bombastic predictions about A.I. at face value, and the views of critics like Marcus seem increasingly moderate\u2026 Post-training improvements don\u2019t seem to be strengthening models as thoroughly as scaling once did. 
A lot of utility can come from souping up your Camry, but no amount of tweaking will turn it into a Ferrari.\u201d<\/p>\n<p>Even better, Newport wound up with this, echoing a passage in my notorious \u201cDeep Learning Is Hitting a Wall,\u201d in which I warned that scaling was not a physical law: <\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!dHa-!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab40fcf9-0d9c-4aff-b338-2178859eb68e_1179x1157.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/08\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/ab40fcf9-0d9c-4aff-b338-2178859eb68e_1179.jpeg\" width=\"1179\" height=\"1157\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/ab40fcf9-0d9c-4aff-b338-2178859eb68e_1179x1157.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1157,&quot;width&quot;:1179,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:494631,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/170969793?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fab40fcf9-0d9c-4aff-b338-2178859eb68e_1179x1157.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>As if Sam\u2019s week couldn\u2019t get worse, columnist Steve Rosenbush at <a href=\"https:\/\/www.wsj.com\/articles\/meet-neurosymbolic-ai-amazons-method-for-enhancing-neural-networks-620dd81a\" rel=\"nofollow noopener\" 
target=\"_blank\">The Wall Street Journal gave me further props<\/a>, on the topic of neurosymbolic AI, the alternative to LLMs that I long advocated, describing how Amazon was now putting neurosymbolic AI into practice and closing his essay with a pointer to my views and<a href=\"https:\/\/open.substack.com\/pub\/garymarcus\/p\/how-o3-and-grok-4-accidentally-vindicated\" rel=\"nofollow noopener\" target=\"_blank\"> a link to another of this newsletter\u2019s essays<\/a>.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!IatE!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa08fd849-4d0e-4cfa-9b80-c8c3cf1c0601_1441x570.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/08\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/a08fd849-4d0e-4cfa-9b80-c8c3cf1c0601_1441.jpeg\" width=\"1441\" height=\"570\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/a08fd849-4d0e-4cfa-9b80-c8c3cf1c0601_1441x570.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:570,&quot;width&quot;:1441,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:208356,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/170969793?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa08fd849-4d0e-4cfa-9b80-c8c3cf1c0601_1441x570.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>Gartner, meanwhile, as reported today in the NYT, is 
forecasting an \u201ctrough of disillusionment\u201d<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!2LMD!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431e685-4d99-4d8f-8ec4-95364ce27088_1246x246.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/08\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/e431e685-4d99-4d8f-8ec4-95364ce27088_1246.jpeg\" width=\"1246\" height=\"246\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/e431e685-4d99-4d8f-8ec4-95364ce27088_1246x246.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:246,&quot;width&quot;:1246,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:98906,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/170969793?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe431e685-4d99-4d8f-8ec4-95364ce27088_1246x246.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\" title=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>\u201cGary Marcus was right\u201d memes, none of which reflect well on OpenAI, were everywhere. 
A reader of this Substack even went so far as to invent a hilarious Gary Marcus Apology Form.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!6aUE!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1dc8d0d-cf47-45ee-a520-7f7614f5458b_1206x1438.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/08\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/b1dc8d0d-cf47-45ee-a520-7f7614f5458b_1206.jpeg\" width=\"1206\" height=\"1438\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/b1dc8d0d-cf47-45ee-a520-7f7614f5458b_1206x1438.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1438,&quot;width&quot;:1206,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:477224,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/170969793?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb1dc8d0d-cf47-45ee-a520-7f7614f5458b_1206x1438.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>I have rarely seen a double reversal of fortune (OpenAI down, me up) so swift.<\/p>\n<p>All that said, I was never in reality \u201cstanding alone\u201d; I surely took more heat than anyone else, but dozens if not hundreds of others spoke out against the scaling-\u00fcber-alles hypothesis over the years. 
Indeed, although it did not get the press it deserved, a recent <a href=\"https:\/\/aaai.org\" rel=\"nofollow noopener\" target=\"_blank\">AAAI<\/a> survey showed that <a href=\"https:\/\/aaai.org\/wp-content\/uploads\/2025\/03\/AAAI-2025-PresPanel-Report-Digital-3.7.25.pdf\" rel=\"nofollow noopener\" target=\"_blank\">the vast majority of academic AI researchers doubted that scaling would get us all the way to AGI<\/a>. They were right.<\/p>\n<p>At last, this is more widely known.<\/p>\n<p>\u00a7<\/p>\n<p>I would never count Sam Altman fully out, and increasingly skeptical mainstream media coverage is not the same as, for example, a change in investor opinion dramatic enough to cause a bubble to deflate. Time will tell how this all plays out. But already some are wondering whether GPT-5\u2019s disappointments could spark a new <a href=\"https:\/\/en.wikipedia.org\/wiki\/AI_winter\" rel=\"nofollow noopener\" target=\"_blank\">AI winter<\/a>. There is a real sense in which GPT-5 is starting to look like Altman\u2019s <a href=\"https:\/\/en.wikipedia.org\/wiki\/Battle_of_Waterloo\" rel=\"nofollow noopener\" target=\"_blank\">Waterloo<\/a>. (Or, if it is Moby Dick you prefer, GPT-5 may be his white whale.)<\/p>\n<p>Three years of hype add up, and Altman simply could not deliver what he (over)promised. GPT-5 was his primary mission as a leader, and he couldn\u2019t get there convincingly. In no way was it AGI. Some customers even wanted the old models back. Almost nobody felt satisfied.<\/p>\n<p>This raises questions about the technology, about the company\u2019s research prowess, and about Altman himself. Was he bullshitting all this time when he told us that <a href=\"https:\/\/arstechnica.com\/information-technology\/2025\/01\/sam-altman-says-we-are-now-confident-we-know-how-to-build-agi\/\" rel=\"nofollow noopener\" target=\"_blank\">the company knew how to build AGI<\/a>? 
<\/p>\n<p>Altman didn\u2019t look a lot better telling CNBC that AGI \u201cis not a super useful term\u201d just days after the disappointments of GPT-5, when he had been hyping AGI for years and\u2014literally just a few days earlier\u2014had claimed that GPT-5 was a \u201csignificant step along our path toward AGI\u201d. Two years ago, many treated Altman like an oracle; now, in the eyes of many, he looks more like a snake oil salesman.<\/p>\n<p>And of course, it is not just about Altman; others have been stalking the same whale, and nobody has yet delivered. Models like Grok-4 and Llama-4 have also underwhelmed.<\/p>\n<p>\u00a7<\/p>\n<p>What\u2019s the moral of this story? <\/p>\n<p>Science is not a popularity contest; you can\u2019t bully your way to truth. And you can\u2019t make AI better if you drown out the critics and keep throwing good money after bad. Science simply cannot advance without people sticking to the truth even in the face of opposition.<\/p>\n<p>The good news here is that science is self-correcting; new approaches will rise again from the ashes. And AGI\u2014hopefully safe, trustworthy AGI\u2014will eventually come. Maybe in the next decade.<\/p>\n<p>But between the disappointments of GPT-5 and a new study from METR showing that <a href=\"https:\/\/x.com\/garymarcus\/status\/1955754567242801500?s=61\" rel=\"nofollow\">LLMs do markedly better on coding benchmarks than in real-world practice<\/a>, I think it is safe to say that LLMs won\u2019t lead the way. And at last that fact is starting to become widely understood. 
<\/p>\n<p>As people begin to recognize that truth (the bitter lesson about the limits of <a href=\"https:\/\/www.cs.utexas.edu\/~eunsol\/courses\/data\/bitter_lesson.pdf\" rel=\"nofollow noopener\" target=\"_blank\">Sutton\u2019s The Bitter Lesson<\/a>), scientists will start to chart new paths. One of the new paths may even give rise to what we so desperately need: AI that we can trust. <\/p>\n<p>I can\u2019t wait to see a whole new batch of discoveries, with minds at last wide open.<\/p>\n<p>Gary Marcus appreciates the support of loyal readers of this newsletter, who stood behind him during darker times. Thank you!<\/p>\n","protected":false},"excerpt":{"rendered":"For the last several years, OpenAI has received more press than God. GPT-3, which most people hardly remember,&hellip;\n","protected":false},"author":2,"featured_media":70395,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-70394","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/70394","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=70394"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.c
om\/au\/wp-json\/wp\/v2\/posts\/70394\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/70395"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=70394"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=70394"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=70394"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}