{"id":316048,"date":"2025-11-29T08:03:24","date_gmt":"2025-11-29T08:03:24","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/316048\/"},"modified":"2025-11-29T08:03:24","modified_gmt":"2025-11-29T08:03:24","slug":"large-language-models-will-never-be-intelligent-expert-says","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/316048\/","title":{"rendered":"Large Language Models Will Never Be Intelligent, Expert Says"},"content":{"rendered":"<p class=\"pw-incontent-excluded article-paragraph skip\">Are tech companies on the verge of creating thinking machines with their tremendous AI models, as top executives claim they are? Not according to one expert.<\/p>\n<p class=\"article-paragraph skip\">We humans tend to associate language with intelligence. We tend to find people with greater linguistic skill, such as gifted orators or writers, more compelling.<\/p>\n<p class=\"article-paragraph skip\">But the latest research suggests that language isn\u2019t the same as intelligence, says Benjamin Riley, founder of the venture Cognitive Resonance, in an <a href=\"https:\/\/www.theverge.com\/ai-artificial-intelligence\/827820\/large-language-models-ai-intelligence-neuroscience-problems\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">essay for The Verge<\/a>. And that\u2019s bad news for the AI industry, which is predicating its hopes and dreams of creating an all-knowing artificial general intelligence, or AGI, on the large language model architecture it\u2019s already using.<\/p>\n<p class=\"article-paragraph skip\">\u201cThe problem is that according to current neuroscience, human thinking is largely independent of human language \u2014 and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own,\u201d Riley wrote. \u201cWe use language to think, but that does not make language the same as thought. 
Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.\u201d<\/p>\n<p class=\"article-paragraph skip\">AGI, to elaborate, would be an AI system that equals or exceeds human cognition across a wide variety of tasks. But in practice, it\u2019s often envisioned as helping solve all the biggest problems humankind can\u2019t, from cancer to climate change. And by saying they\u2019re creating one, AI leaders can justify the industry\u2019s <a href=\"https:\/\/futurism.com\/future-society\/entire-economy-ai-bubble\" rel=\"nofollow noopener\" target=\"_blank\">exorbitant spending<\/a> and <a href=\"https:\/\/futurism.com\/ai-pollution-carbon-energy\" rel=\"nofollow noopener\" target=\"_blank\">catastrophic environmental impact<\/a>.<\/p>\n<p class=\"article-paragraph skip\">Part of the reason why AI capex has been so out of control is the obsession with scaling: by furnishing the AI models with more data and powering them with ever-growing numbers of GPUs, AI companies have made their models better problem solvers and more humanlike in their ability to hold a conversation.<\/p>\n<p class=\"article-paragraph skip\">But \u201cLLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build,\u201d Riley wrote.<\/p>\n<p class=\"article-paragraph skip\">If language were essential to thinking, then taking it away should take away our ability to think. 
But this doesn\u2019t happen, Riley points out, citing decades of research <a href=\"https:\/\/gwern.net\/doc\/psychology\/linguistics\/2024-fedorenko.pdf\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">summarized in a commentary<\/a> published in Nature last year.<\/p>\n<p class=\"article-paragraph skip\">For one, functional magnetic resonance imaging (fMRI) of human brains has shown that distinct parts of the brain are activated during different cognitive activities, Riley notes. We\u2019re not recruiting the same region of neurons when pondering a math problem versus a language one. Meanwhile, studies of people who lost their language abilities showed that their ability to think was largely unimpaired, since they could still solve math problems, follow nonverbal instructions, and understand other people\u2019s emotions.<\/p>\n<p class=\"article-paragraph skip\">Even some leading AI figures are skeptical of LLMs. Most famous of all is the Turing Award winner and \u201cgodfather\u201d of modern AI, Yann LeCun, who until recently was Meta\u2019s top AI scientist. LeCun has long argued that LLMs will never reach general intelligence, and instead believes in pursuing so-called \u201cworld\u201d models, which are designed to understand the three-dimensional world by training on a variety of physical data rather than just language. It\u2019s likely that this view led to his recent departure; despite LeCun\u2019s position, Meta CEO Mark Zuckerberg has pivoted to <a href=\"https:\/\/www.cnbc.com\/2025\/06\/30\/mark-zuckerberg-creating-meta-superintelligence-labs-read-the-memo.html\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">pouring billions of dollars into a new AI division<\/a> for creating an artificial \u201csuperintelligence\u201d using LLM technology.<\/p>\n<p class=\"article-paragraph skip\">Other research adds to the idea that LLMs have a hard ceiling. 
In a <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/jocb.70077\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">new analysis<\/a> published in the Journal of Creative Behavior, a researcher used a mathematical formula to determine the limits of AI \u201ccreativity,\u201d with damning results. Because LLMs are probabilistic systems, they reach a point where they are no longer capable of generating novel outputs that aren\u2019t nonsensical. As a result, the study concluded that even the best AI systems will never be anything more than serviceable artists that write you a nice, wordy email.<\/p>\n<p class=\"article-paragraph skip\">\u201cWhile AI can mimic creative behavior \u2014 quite convincingly at times \u2014 its actual creative capacity is capped at the level of an average human and can never reach professional or expert standards under current design principles,\u201d study author David H. Cropley, a professor of engineering innovation at the University of South Australia, said in a <a href=\"https:\/\/www.psypost.org\/a-mathematical-ceiling-limits-generative-ai-to-amateur-level-creativity\/\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">statement<\/a> about the work.<\/p>\n<p class=\"article-paragraph skip\">\u201cA skilled writer, artist or designer can occasionally produce something truly original and effective,\u201d Cropley added. \u201cAn LLM never will. It will always produce something average, and if industries rely too heavily on it, they will end up with formulaic, repetitive work.\u201d<\/p>\n<p class=\"article-paragraph skip\">That isn\u2019t a promising portent if LLM-powered AI is supposed to think up new innovations and push the envelope of our understanding of the world. 
How will it invent \u201cnew physics,\u201d as Elon Musk says it will, or solve the climate crisis, as OpenAI CEO Sam Altman <a href=\"https:\/\/ia.samaltman.com\" rel=\"nofollow noreferrer noopener\" target=\"_blank\">has suggested<\/a>, if the tech struggles to string together new sentences that aren\u2019t based on preexisting writing?<\/p>\n<p class=\"article-paragraph skip\">\u201cYes, an AI system might remix and recycle our knowledge in interesting ways,\u201d Riley writes. \u201cBut that\u2019s all it will be able to do. It will be forever trapped in the vocabulary we\u2019ve encoded in our data and trained it upon \u2014 a dead-metaphor machine.\u201d<\/p>\n<p class=\"article-paragraph skip\">More on AI: <a href=\"https:\/\/futurism.com\/artificial-intelligence\/godfather-ai-breakdown-society\" rel=\"nofollow noopener\" target=\"_blank\">Godfather of AI Predicts Total Breakdown of Society<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Are tech companies on the verge of creating thinking machines with their tremendous AI models, as top 
executives&hellip;\n","protected":false},"author":2,"featured_media":316049,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-316048","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/316048","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=316048"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/316048\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/316049"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=316048"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=316048"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=316048"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}