{"id":158798,"date":"2025-11-25T12:50:10","date_gmt":"2025-11-25T12:50:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/158798\/"},"modified":"2025-11-25T12:50:10","modified_gmt":"2025-11-25T12:50:10","slug":"the-ai-boom-is-based-on-a-fundamental-mistake","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/158798\/","title":{"rendered":"The AI boom is based on a fundamental mistake"},"content":{"rendered":"<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdy6 _17nnmdy5 _1xwtict1\">\u201cDeveloping superintelligence is now in sight,\u201d <a href=\"https:\/\/www.meta.com\/superintelligence\/?srsltid=AfmBOordifyF0yEAoSvgERn-1kSfAWRL9lMWOGQGF_B0fKHcWf7onC_L\" rel=\"nofollow noopener\" target=\"_blank\">says<\/a> Mark Zuckerberg, heralding the \u201ccreation and discovery of new things that aren\u2019t imaginable today.\u201d Powerful AI \u201cmay come as soon as 2026 [and will be] smarter than a Nobel Prize winner across most relevant fields,\u201d <a href=\"https:\/\/www.darioamodei.com\/essay\/machines-of-loving-grace\" rel=\"nofollow noopener\" target=\"_blank\">says<\/a> Dario Amodei, offering the doubling of human lifespans or even \u201cescape velocity\u201d from death itself. \u201cWe are now confident we know how to build AGI,\u201d <a href=\"https:\/\/blog.samaltman.com\/reflections\" rel=\"nofollow noopener\" target=\"_blank\">says<\/a> Sam Altman, referring to the industry\u2019s holy grail of artificial general intelligence \u2014 and soon superintelligent AI \u201ccould massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Should we believe them? 
Not if we trust the science of human intelligence, and simply look at the AI systems these companies have produced so far.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The common feature cutting across chatbots such as OpenAI\u2019s ChatGPT, Anthropic\u2019s Claude, Google\u2019s Gemini, and whatever Meta is calling its AI product this week is that they are all primarily \u201clarge language models.\u201d Fundamentally, they are based on gathering an extraordinary amount of linguistic data (much of it codified on the internet), finding correlations between words (more accurately, sub-words called \u201ctokens\u201d), and then predicting what output should follow given a particular prompt as input. For all the alleged complexity of generative AI, at their core they really are models of language.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The problem is that according to current neuroscience, human thinking is largely independent of human language \u2014 and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. 
Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">The AI hype machine relentlessly promotes the idea that we\u2019re on the verge of creating something as intelligent as humans, or even \u201csuperintelligence\u201d that will dwarf our own cognitive capacities. If we gather tons of data about the world, and combine this with ever more powerful computing power (read: Nvidia chips) to improve our statistical correlations, then presto, we\u2019ll have AGI. Scaling is all we need.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But this theory is seriously scientifically flawed. LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning, no matter how many data centers we build.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">We use language to think, but that does not make language the same as thought<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Last year, three scientists published a <a href=\"https:\/\/gwern.net\/doc\/psychology\/linguistics\/2024-fedorenko.pdf\" rel=\"nofollow noopener\" target=\"_blank\">commentary<\/a> in the journal Nature titled, with admirable clarity, \u201cLanguage is primarily a tool for communication rather than thought.\u201d Co-authored by Evelina Fedorenko (MIT), Steven T. Piantadosi (UC Berkeley) and Edward A.F. 
Gibson (MIT), the article is a tour de force summary of decades of scientific research regarding the relationship between language and thought, and has two purposes: one, to tear down the notion that language gives rise to our ability to think and reason, and two, to build up the idea that language evolved as a cultural tool we use to share our thoughts with one another.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Let\u2019s take each of these claims in turn.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">When we contemplate our own thinking, it often feels as if we are thinking in a particular language, and therefore because of our language. But if it were true that language is essential to thought, then taking away language should likewise take away our ability to think. This does not happen. I repeat: Taking away language does not take away our ability to think. And we know this for a couple of empirical reasons.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">First, using advanced functional magnetic resonance imaging (fMRI), we can see different parts of the human brain activating when we engage in different mental activities. 
As it turns out, when we engage in various cognitive activities \u2014 solving a math problem, say, or trying to understand what is happening in the mind of another human \u2014 different parts of our brains \u201clight up\u201d as part of networks that are distinct from our linguistic ability:<\/p>\n<p><a class=\"kqz8fh1\" href=\"https:\/\/platform.theverge.com\/wp-content\/uploads\/sites\/2\/2025\/11\/Screenshot-2025-11-24-at-2.11.48%E2%80%AFPM.png?quality=90&amp;strip=all&amp;crop=0,0,100,100\" data-pswp-height=\"700\" data-pswp-width=\"2074\" target=\"_blank\" rel=\"noreferrer nofollow noopener\"><img alt=\"A set of images of the brain, with different parts lighting up, labeled \u201clanguage network,\u201d \u201cmultiple demand network,\u201d and \u201ctheory of mind network,\u201d all of which support different functions.\" data-chromatic=\"ignore\" loading=\"lazy\" decoding=\"async\" data-nimg=\"fill\" class=\"x271pn0\" style=\"position:absolute;height:100%;width:100%;left:0;top:0;right:0;bottom:0;color:transparent;background-size:cover;background-position:50% 50%;background-repeat:no-repeat;background-image:url(&quot;data:image\/svg+xml;charset=utf-8,%3Csvg xmlns='http:\/\/www.w3.org\/2000\/svg' %3E%3Cfilter id='b' color-interpolation-filters='sRGB'%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3CfeColorMatrix values='1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 100 -1' result='s'\/%3E%3CfeFlood x='0' y='0' width='100%25' height='100%25'\/%3E%3CfeComposite operator='out' in='s'\/%3E%3CfeComposite in2='SourceGraphic'\/%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3C\/filter%3E%3Cimage width='100%25' height='100%25' x='0' y='0' preserveAspectRatio='none' style='filter: url(%23b);' href='data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mN8+R8AAtcB6oaHtZcAAAAASUVORK5CYII='\/%3E%3C\/svg%3E&quot;)\"   src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/11\/Screenshot-2025-11-24-at-2.11.48\u202fPM.png\"\/><\/a><\/p>\n<p 
class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Second, studies of humans who have lost their language abilities due to brain damage or other disorders demonstrate conclusively that this loss does not fundamentally impair the general ability to think. \u201cThe evidence is unequivocal,\u201d Fedorenko et al. state, that \u201cthere are many cases of individuals with severe linguistic impairments \u2026 who nevertheless exhibit intact abilities to engage in many forms of thought.\u201d These people can solve math problems, follow nonverbal instructions, understand the motivation of others, and engage in reasoning \u2014 including formal logical reasoning and causal reasoning about the world.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">If you\u2019d like to independently investigate this for yourself, here\u2019s one simple way: Find a baby and watch them (when they\u2019re not napping). What you will no doubt observe is a tiny human curiously exploring the world around them, playing with objects, making noises, imitating faces, and otherwise learning from interactions and experiences. \u201cStudies suggest that children learn about the world in much the same way that scientists do\u2014by conducting experiments, analyzing statistics, and forming intuitive theories of the physical, biological and psychological realms,\u201d the cognitive scientist Alison Gopnik <a href=\"https:\/\/alisongopnik.com\/Papers_Alison\/sciam-Gopnik.pdf\" rel=\"nofollow noopener\" target=\"_blank\">notes<\/a>, all before learning how to talk. Babies may not yet be able to use language, but of course they are thinking! 
And every parent knows the joy of watching their child\u2019s cognition emerge over time, at least until the teen years.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">So, scientifically speaking, language is only one aspect of human thinking, and much of our intelligence involves our non-linguistic capacities. Why then do so many of us intuitively feel otherwise?<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">This brings us to the second major claim in the Nature article by Fedorenko et al., that language is primarily a tool we use to share our thoughts with one another \u2014 an \u201cefficient communication code,\u201d in their words. This is evidenced by the fact that, across the wide diversity of human languages, they share certain common features that make them \u201ceasy to produce, easy to learn and understand, concise and efficient for use, and robust to noise.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup qnnwq2 _1xwtict9\">Even parts of the AI industry are growing critical of LLMs<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Without diving too deep into the linguistic weeds here, the upshot is that human beings, as a species, benefit tremendously from using language to share our knowledge, both in the present and across generations. 
Understood this way, language is what the cognitive scientist Cecilia Heyes calls a \u201c<a href=\"https:\/\/www.educationnext.org\/cognitive-gadgets-theory-might-change-your-mind-literally\/\" rel=\"nofollow noopener\" target=\"_blank\">cognitive gadget<\/a>\u201d that \u201cenables humans to learn from others with extraordinary efficiency, fidelity, and precision.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Our cognition improves because of language \u2014 but it\u2019s not created or defined by it.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">But take away language from a large language model, and you are left with literally nothing at all.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">An AI enthusiast might argue that human-level intelligence doesn\u2019t necessarily need to function in the same way as human cognition. AI models have surpassed human performance in activities like chess using processes that differ from what we do, so perhaps they could become superintelligent through some unique method based on drawing correlations from training data.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Maybe! But there\u2019s no obvious reason to think we can get to general intelligence \u2014 not just improvement on narrowly defined tasks \u2014 through text-based training. 
After all, humans possess all sorts of knowledge that is not easily encapsulated in linguistic data \u2014 and if you doubt this, think about how you know how to ride a bike.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">In fact, within the AI research community there is growing awareness that LLMs are, in and of themselves, insufficient models of human intelligence. For example, Yann LeCun, a Turing Award winner for his AI research and a <a href=\"https:\/\/www.wsj.com\/tech\/ai\/yann-lecun-ai-meta-0058b13c\" rel=\"nofollow noopener\" target=\"_blank\">prominent skeptic of LLMs<\/a>, left his role at Meta last week to found an AI startup developing what are dubbed world models: \u201csystems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.\u201d And recently, a group of prominent AI scientists and \u201cthought leaders\u201d \u2014 including Yoshua Bengio (another Turing Award winner), former Google CEO Eric Schmidt, and noted AI skeptic Gary Marcus \u2014 <a href=\"https:\/\/www.agidefinition.ai\/paper.pdf\" rel=\"nofollow noopener\" target=\"_blank\">coalesced<\/a> around a working definition of AGI as \u201cAI that can match or exceed the cognitive versatility and proficiency of a well-educated adult\u201d (emphasis added). 
Rather than treating intelligence as a \u201cmonolithic capacity,\u201d they propose instead we embrace a model of both human and artificial cognition that reflects \u201ca complex architecture composed of many distinct abilities.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">They argue intelligence looks something like this:<\/p>\n<p><a class=\"kqz8fh1\" href=\"https:\/\/platform.theverge.com\/wp-content\/uploads\/sites\/2\/2025\/11\/Screenshot-2025-11-24-at-2.09.06%E2%80%AFPM.png?quality=90&amp;strip=all&amp;crop=0,0,100,100\" data-pswp-height=\"1200\" data-pswp-width=\"1694\" target=\"_blank\" rel=\"noreferrer nofollow noopener\"><img alt=\"A chart that looks like a spiderweb, with different axes labeled \u201cspeed,\u201d \u201cknowledge,\u201d \u201creading &amp; writing,\u201d \u201cmath,\u201d \u201creasoning,\u201d \u201cworking memory,\u201d \u201cmemory storage,\u201d \u201cmemory retrieval,\u201d \u201cvisual,\u201d and \u201cauditory.\u201d\" data-chromatic=\"ignore\" loading=\"lazy\" decoding=\"async\" data-nimg=\"fill\" class=\"x271pn0\" style=\"position:absolute;height:100%;width:100%;left:0;top:0;right:0;bottom:0;color:transparent;background-size:cover;background-position:50% 50%;background-repeat:no-repeat;background-image:url(&quot;data:image\/svg+xml;charset=utf-8,%3Csvg xmlns='http:\/\/www.w3.org\/2000\/svg' %3E%3Cfilter id='b' color-interpolation-filters='sRGB'%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3CfeColorMatrix values='1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 100 -1' result='s'\/%3E%3CfeFlood x='0' y='0' width='100%25' height='100%25'\/%3E%3CfeComposite operator='out' in='s'\/%3E%3CfeComposite in2='SourceGraphic'\/%3E%3CfeGaussianBlur stdDeviation='20'\/%3E%3C\/filter%3E%3Cimage width='100%25' height='100%25' x='0' y='0' preserveAspectRatio='none' style='filter: url(%23b);' 
href='data:image\/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mN8+R8AAtcB6oaHtZcAAAAASUVORK5CYII='\/%3E%3C\/svg%3E&quot;)\"   src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/11\/Screenshot-2025-11-24-at-2.09.06\u202fPM.png\"\/><\/a><\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Is this progress? Perhaps, insofar as this moves us past the silly quest for more training data to feed into server racks. But there are still some problems. Can we really aggregate individual cognitive capabilities and deem the resulting sum to be general intelligence? How do we define what weights they should be given, and what capabilities to include and exclude? What exactly do we mean by \u201cknowledge\u201d or \u201cspeed,\u201d and in what contexts? And while these experts agree simply scaling language models won\u2019t get us there, their proposed paths forward are all over the place \u2014 they\u2019re offering a better goalpost, not a roadmap for reaching it.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Whatever the method, let\u2019s assume that in the not-too-distant future, we succeed in building an AI system that performs admirably well across the broad range of cognitively challenging tasks reflected in this spiderweb graphic. Will we have built an AI system that possesses the sort of intelligence that will lead to transformative scientific discoveries, as the Big Tech CEOs are promising? Not necessarily. 
Because there\u2019s one final hurdle: Even replicating the way humans currently think doesn\u2019t guarantee AI systems can make the cognitive leaps humanity achieves.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">We can credit Thomas Kuhn and his book The Structure of Scientific Revolutions for our notion of \u201cscientific paradigms,\u201d the basic frameworks for how we understand our world at any given time. He argued these paradigms \u201cshift\u201d not as the result of iterative experimentation, but rather when new questions and ideas emerge that no longer fit within our existing scientific descriptions of the world. Einstein, for example, conceived of relativity before any empirical evidence confirmed it. Building off this notion, the philosopher Richard Rorty contended that it is when scientists and artists become dissatisfied with existing paradigms (or vocabularies, as he called them) that they create new metaphors that give rise to new descriptions of the world \u2014 and if these new ideas are useful, they then become our common understanding of what is true. As such, he argued, \u201ccommon sense is a collection of dead metaphors.\u201d<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. 
But they have no apparent reason to become dissatisfied with the data they\u2019re being fed \u2014 and by extension, to make great scientific and creative leaps.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _17nnmdya _1xwtict1\">Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that\u2019s all it will be able to do. It will be forever trapped in the vocabulary we\u2019ve encoded in our data and trained it upon \u2014 a dead-metaphor machine. And actual humans \u2014 thinking and reasoning and using language to communicate our thoughts to one another \u2014 will remain at the forefront of transforming our understanding of the world.<\/p>\n<p class=\"duet--article--dangerously-set-cms-markup duet--article--standard-paragraph _1ymtmqpi _17nnmdy1 _17nnmdy0 _1xwtict1\">Benjamin Riley is the founder of <a href=\"https:\/\/www.cognitiveresonance.net\/\" rel=\"nofollow noopener\" target=\"_blank\">Cognitive Resonance<\/a>, a new venture dedicated to helping people understand human cognition and generative AI. Portions of this essay initially appeared on the Cognitive Resonance <a href=\"https:\/\/buildcognitiveresonance.substack.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Substack<\/a>. 
<\/p>\n","protected":false},"excerpt":{"rendered":"\u201cDeveloping superintelligence is now in sight,\u201d says Mark Zuckerberg, heralding the \u201ccreation and discovery of new things 
that&hellip;\n","protected":false},"author":2,"featured_media":158799,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,1094,82,80],"class_list":{"0":"post-158798","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-report","14":"tag-science","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/158798","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=158798"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/158798\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/158799"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=158798"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=158798"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=158798"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}