{"id":125679,"date":"2025-09-01T21:04:09","date_gmt":"2025-09-01T21:04:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/125679\/"},"modified":"2025-09-01T21:04:09","modified_gmt":"2025-09-01T21:04:09","slug":"when-ai-freezes-over-psychology-today","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/125679\/","title":{"rendered":"When AI Freezes Over | Psychology Today"},"content":{"rendered":"<p>A phrase I&#8217;ve often clung to regarding <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/artificial-intelligence\" title=\"Psychology Today looks at artificial intelligence\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">artificial intelligence<\/a> is one that is also cloaked in a bit of techno-mystery. And I bet you&#8217;ve heard it as part of the lexicon of technology and <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/imagination\" title=\"Psychology Today looks at imagination\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">imagination<\/a>: &#8220;emergent abilities.&#8221; It&#8217;s common to hear that large language models (LLMs) have these curious \u201cemergent\u201d behaviors that are often coupled with linguistic partners like scaling and complexity. And yes, I&#8217;m guilty too. <\/p>\n<p>In AI research, this phrase first took off after a <a href=\"https:\/\/arxiv.org\/pdf\/2206.07682\" rel=\"nofollow noopener\" target=\"_blank\">2022 paper<\/a> that described how abilities seem to appear suddenly as models scale: tasks that a small model fails at completely, a larger model suddenly handles with ease. One day a model can\u2019t solve math problems, the next day it can. It\u2019s an irresistible story, as if machines have their own little Archimedean \u201ceureka!\u201d moments. 
It&#8217;s almost as if &#8220;<a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/intelligence\" title=\"Psychology Today looks at intelligence\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">intelligence<\/a>&#8221; has suddenly switched on.<\/p>\n<p>But I&#8217;m not buying into the sensation, at least not yet. A newer <a href=\"https:\/\/iopscience.iop.org\/article\/10.1088\/1742-5468\/ade137\" rel=\"nofollow noopener\" target=\"_blank\">2025 study<\/a> suggests we should be more careful. Instead of magical leaps, what we\u2019re seeing looks a lot more like the physics of phase changes.<\/p>\n<p>Ice, Water, and Math<\/p>\n<p>Think about water. At one temperature it\u2019s liquid, at another it\u2019s ice. The molecules don\u2019t become something new\u2014they\u2019re always two hydrogens and an oxygen\u2014but the way they organize shifts dramatically. At the freezing point, hydrogen bonds lock the molecules into an open lattice, with those fleeting electrical charges on the hydrogen atoms holding them in alignment. The result is ice, the same ingredients reorganized into a solid that\u2019s curiously less dense than liquid water. And, yes, there\u2019s even a touch of magic in the science as ice floats. But that magic melts when you learn about <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/abs\/pii\/S129325580300092X\" rel=\"nofollow noopener\" target=\"_blank\">Van der Waals forces<\/a>.<\/p>\n<p>The same kind of shift shows up in LLMs and is often mislabeled as \u201cemergence.\u201d In small models, the easiest strategy is positional, where computation leans on word order and simple statistical shortcuts. It\u2019s an easy trick that works just enough to reduce error. But scale things up by using more parameters and data, and the system reorganizes. 
The 2025 study by Cui shows that, at a critical threshold, the model shifts into semantic mode and relies on the geometry of meaning in its <a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/the-digital-self\/202507\/ais-hidden-geometry-of-thought\" rel=\"nofollow noopener\" target=\"_blank\">high-dimensional vector space<\/a>. It isn\u2019t magic, it\u2019s optimization. Just as water molecules align into a lattice, the model settles into a more stable solution in its mathematical landscape.<\/p>\n<p>The Mirage of \u201cEmergence\u201d<\/p>\n<p>That 2022 paper called these shifts emergent abilities. And yes, tasks like arithmetic or multi-step reasoning can look as though they \u201cswitch on.\u201d But the model hasn\u2019t suddenly \u201cunderstood\u201d arithmetic. What\u2019s happening is that semantic generalization finally outperforms positional shortcuts once scale crosses a threshold. Yes, it&#8217;s a mouthful. But what&#8217;s happening here is a computational shift from a simple &#8220;word position&#8221; cue in a prompt (like, the cat in the _____) to a complex, <a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/the-digital-self\/202507\/ais-hidden-geometry-of-thought\" rel=\"nofollow noopener\" target=\"_blank\">hyperdimensional matrix<\/a> where semantic associations across thousands of dimensions give the computation its remarkable power.<\/p>\n<p>And those sudden jumps? They\u2019re often illusions. On simple pass\/fail tests, a model can look stuck at zero until it finally tips over the line, and then it seems to leap forward. In reality, it was improving step by step all along. The so-called \u201clight-bulb moment\u201d is really just a quirk of how we measure progress. No emergence, just math.<\/p>\n<p>Why \u201cEmergence\u201d Is So Seductive<\/p>\n<p>Why does the language of \u201cemergence\u201d stick? 
Because it borrows from biology and <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/philosophy\" title=\"Psychology Today looks at philosophy\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">philosophy<\/a>. Life \u201cemerges\u201d from chemistry as consciousness \u201cemerges\u201d from neurons. It makes LLMs sound like they\u2019re undergoing cognitive leaps. Some argue emergence is a hallmark of complex systems, and there\u2019s truth to that. So, to a degree, it does capture the idea of surprising shifts.<\/p>\n<p>But we need to be careful. What\u2019s happening here is still math, not mind. Calling it emergence risks sliding into <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/anthropomorphism\" title=\"Psychology Today looks at anthropomorphism\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">anthropomorphism<\/a>, where sudden performance shifts are mistaken for genuine understanding. And it happens all the time.<\/p>\n<p>A Useful Imitation<\/p>\n<p>The 2022 paper gave us the language of \u201cemergence.\u201d The 2025 paper shows that what looks like emergence is really closer to a high-complexity phase change. It&#8217;s the same math and the same machinery. At small scales, positional tricks (word sequence) dominate. At large scales, semantic structures (multidimensional linguistic analysis) win out.<\/p>\n<p>No insight, no spark of consciousness. It&#8217;s just a system reorganizing under new constraints. 
And this supports my larger thesis: What we\u2019re witnessing isn\u2019t intelligence at all, but <a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/the-digital-self\/202507\/ai-and-the-architecture-of-anti-intelligence\" rel=\"nofollow noopener\" target=\"_blank\">anti-intelligence<\/a>, a powerful, useful imitation that mimics the surface of <a href=\"https:\/\/www.psychologytoday.com\/us\/basics\/cognition\" title=\"Psychology Today looks at cognition\" class=\"basics-link\" hreflang=\"en\" rel=\"nofollow noopener\" target=\"_blank\">cognition<\/a> without the interior substance that only a human mind offers.<\/p>\n<p>Artificial Intelligence Essential Reads<\/p>\n<p>So the next time you hear about an LLM with \u201cemergent ability,\u201d don\u2019t imagine Archimedes leaping from his bath. Picture water freezing. The same molecules, new structure. The same math, new mode. What looks like insight is just another phase of anti-intelligence that is complex, fascinating, even beautiful in its way, but not to be mistaken for a mind.<\/p>\n","protected":false},"excerpt":{"rendered":"A phrase I&#8217;ve often clung to regarding artificial intelligence is one that is also cloaked in a 
bit&hellip;\n","protected":false},"author":2,"featured_media":125680,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-125679","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/125679","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=125679"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/125679\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/125680"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=125679"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=125679"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=125679"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}