{"id":220584,"date":"2025-10-17T16:04:11","date_gmt":"2025-10-17T16:04:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/220584\/"},"modified":"2025-10-17T16:04:11","modified_gmt":"2025-10-17T16:04:11","slug":"is-agi-the-right-goal-for-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/220584\/","title":{"rendered":"Is AGI the right goal for AI?"},"content":{"rendered":"<p>I am someone who believes that AGI (Artificial General Intelligence) could change the world, but also someone who thinks that LLMs are not the right path there, morally or technically. In <a href=\"https:\/\/www.nytimes.com\/2025\/10\/16\/opinion\/ai-specialized-potential.html?unlocked_article_code=1.t08.B01D.pFP2y6z-d7vN&amp;smid=nytcore-ios-share&amp;referringSource=articleShare\" rel=\"nofollow noopener\" target=\"_blank\">an op-ed today in The New York Times<\/a>, I argue that, at least for now, AGI may not be the right goal:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!0vuc!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dad87d8-0a78-4a45-809c-cffc997f3d21_1413x2160.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/10\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/7dad87d8-0a78-4a45-809c-cffc997f3d21_1413.jpeg\" width=\"1413\" height=\"2160\" 
data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/7dad87d8-0a78-4a45-809c-cffc997f3d21_1413x2160.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2160,&quot;width&quot;:1413,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3920330,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/176331613?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7dad87d8-0a78-4a45-809c-cffc997f3d21_1413x2160.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\" fetchpriority=\"high\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>The crux of <a href=\"https:\/\/www.nytimes.com\/2025\/10\/16\/opinion\/ai-specialized-potential.html?unlocked_article_code=1.t08.B01D.pFP2y6z-d7vN&amp;smid=nytcore-ios-share&amp;referringSource=articleShare\" rel=\"nofollow noopener\" target=\"_blank\">this essay<\/a>, which draws on positive examples from Waymo and Google DeepMind, is this:<\/p>\n<p>[LLMs] have always been prone to hallucinations and errors. Those obstacles may be one reason generative A.I. <a href=\"https:\/\/www.nytimes.com\/2025\/08\/13\/business\/ai-business-payoff-lags.html\" rel=\"nofollow noopener\" target=\"_blank\">hasn\u2019t led<\/a> to the skyrocketing in profits and productivity that many in the tech industry predicted. A recent study run by M.I.T.\u2019s NANDA Initiative <a href=\"https:\/\/mlq.ai\/media\/quarterly_decks\/v0.1_State_of_AI_in_Business_2025_Report.pdf\" rel=\"nofollow noopener\" target=\"_blank\">found<\/a> that 95 percent of companies that did A.I. pilot studies found little or no return on their investment. 
A recent financial analysis <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2025-09-23\/an-800-billion-revenue-shortfall-threatens-ai-future-bain-says\" rel=\"nofollow noopener\" target=\"_blank\">projects<\/a> an estimated shortfall of $800 billion in revenue for A.I. companies by the end of 2030.<\/p>\n<p>If the strengths of A.I. are to truly be harnessed, the tech industry should stop focusing so heavily on these one-size-fits-all tools, and instead concentrate on narrow, specialized A.I. tools engineered for particular problems.<\/p>\n<p>\u2026<\/p>\n<p>Right now, it feels as if Big Tech is throwing general-purpose A.I. spaghetti at the wall and hoping that nothing truly terrible sticks. As the A.I. pioneer Yoshua Bengio has <a href=\"https:\/\/time.com\/7283507\/safer-ai-development\/\" rel=\"nofollow noopener\" target=\"_blank\">recently emphasized<\/a>, advancing generalized A.I. systems that can exhibit greater autonomy isn\u2019t necessarily aligned with human interests. Humanity would be better served by labs devoting more resources to building specialized tools for science, medicine, technology and education.<\/p>\n<p>Hope you have time to read <a href=\"https:\/\/www.nytimes.com\/2025\/10\/16\/opinion\/ai-specialized-potential.html?unlocked_article_code=1.t08.B01D.pFP2y6z-d7vN&amp;smid=nytcore-ios-share&amp;referringSource=articleShare\" rel=\"nofollow noopener\" target=\"_blank\">the whole essay<\/a>; it\u2019s short and sweet, touching on some deep ideas in cognitive science and why they matter so much in the real world, all in a remarkably readable way.<\/p>\n<p>(Shout out to my editor Neel Patel for helping make the essay read so elegantly!)<\/p>\n<p>\u00a7<\/p>\n<p>By coincidence, though, that\u2019s not the only new essay I have out today about AGI. 
The other one (in which I played just a tiny role) is with AI safety researcher Dan Hendrycks and a large cast of 30-some eminent researchers, including Yoshua Bengio, <a href=\"https:\/\/www.agidefinition.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">in which we try to define AGI<\/a>.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!cvDr!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53ab4e78-e810-46d9-b588-5738ca2aa770_1314x1350.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/10\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/53ab4e78-e810-46d9-b588-5738ca2aa770_1314.jpeg\" width=\"1314\" height=\"1350\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/53ab4e78-e810-46d9-b588-5738ca2aa770_1314x1350.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1350,&quot;width&quot;:1314,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:311209,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/176331613?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F53ab4e78-e810-46d9-b588-5738ca2aa770_1314x1350.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\" loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>\u00a7<\/p>\n<p>We land on a definition of 
AGI (\u201cAGI is an AI that can match or exceed the cognitive versatility and proficiency of a well-educated adult\u201d) that is actually extremely close to <a href=\"https:\/\/garymarcus.substack.com\/p\/dear-elon-musk-here-are-five-things?r=26o3px&amp;utm_medium=ios&amp;triedRedirect=true\" rel=\"nofollow noopener\" target=\"_blank\">what I proposed here a few years ago<\/a> (\u201cany intelligence &#8230; that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence\u201d), based on conversations with the originators of the term, Shane Legg and Ben Goertzel. (Peter Voss, the other coiner of the term, signed on afterwards.)<\/p>\n<p>I don\u2019t agree with every detail of the new paper (that\u2019s not practical when you have over 30 authors), but I signed on because I strongly support the paper\u2019s goal of trying to better articulate what it means to have a mind with the flexibility and generality of a human mind.<\/p>\n<p>The main alternative \u2014 definitions of AGI framed in terms of economic criteria like profits (<a href=\"https:\/\/www.theinformation.com\/articles\/microsoft-and-openai-wrangle-over-terms-of-their-blockbuster-partnership?rc=onjv7n\" rel=\"nofollow noopener\" target=\"_blank\">AGI as being $100 billion in profits<\/a>) or <a href=\"https:\/\/openai.com\/charter\/\" rel=\"nofollow noopener\" target=\"_blank\">percentage of jobs displaced<\/a> \u2014 seems to me <a href=\"https:\/\/garymarcus.substack.com\/p\/the-five-stages-of-agi-grief?utm_source=publication-search\" rel=\"nofollow noopener\" target=\"_blank\">fundamentally misguided<\/a>. Pure profits is obviously a nonstarter (iPhones have earned Apple hundreds of billions in profits, but that doesn\u2019t make them AGI). But even the second is a bit of a red herring, mixing together facts about capital and wages with cognition. 
Fundamentally, efforts to define AGI in economic terms distract from the core cognitive issues about what it means to build a powerful mind.<\/p>\n<p>The new paper, which tries to break down cognition into a wide number of subareas, is a welcome corrective.<\/p>\n<p>Cognition is not one thing but many, as Chaz Firestone and Brian Scholl once wrote. This is one attempt to articulate that, and as the paper notes, current techniques capture only a fraction of what ultimately needs to be captured:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!0nFJ!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5127292a-6b0d-45f2-970f-0bb618cda80b_1619x1101.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/10\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/5127292a-6b0d-45f2-970f-0bb618cda80b_1619.jpeg\" width=\"1456\" height=\"990\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/5127292a-6b0d-45f2-970f-0bb618cda80b_1619x1101.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:990,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:349370,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/176331613?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5127292a-6b0d-45f2-970f-0bb618cda80b_1619x1101.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\" loading=\"lazy\" 
class=\"sizing-normal\"\/><\/a><\/p>\n<p>One could argue with this breakdown, but it\u2019s a good opening bid. I expect lots of commentaries to take other cuts on the problem, and I am glad this will open those conversations. (I am personally in no way committed to the particulars.)<\/p>\n<p>The weakness in the paper is that it is built on a bunch of benchmarks, and we all know benchmarks can be gamed. There are also a number of fairly arbitrary decisions that may not stand the test of time (e.g., how much weight to put on each submeasure in the many tests that are aggregated). The whole thing should be viewed as evolving, with future measures replacing existing ones as researchers devise better benchmarks \u2014 and not as a static checklist. Our current understanding of how to assess intelligence is just a moment in time. I hope the paper won\u2019t accidentally enshrine that particular moment in perpetuity.<\/p>\n<p>As such, I certainly don\u2019t think our paper should be seen as the last word here, but I think it does a great job launching an important discussion.<\/p>\n<p>Happy reading!<\/p>\n","protected":false},"excerpt":{"rendered":"I am someone who believes that AGI (Artificial General Intelligence) could change the world, but also someone 
who&hellip;\n","protected":false},"author":2,"featured_media":220585,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-220584","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/220584","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=220584"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/220584\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/220585"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=220584"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=220584"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=220584"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}