{"id":382941,"date":"2026-04-05T09:22:33","date_gmt":"2026-04-05T09:22:33","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/382941\/"},"modified":"2026-04-05T09:22:33","modified_gmt":"2026-04-05T09:22:33","slug":"ai-body-gap-why-robots-need-internal-feelings-to-be-safe","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/382941\/","title":{"rendered":"AI Body Gap: Why Robots Need &#8220;Internal Feelings&#8221; to be Safe"},"content":{"rendered":"<p>Summary: When you reach for a saltshaker, your brain isn\u2019t just calculating coordinates; it\u2019s listening to your body\u2019s sense of balance, the friction on your skin, and your internal level of thirst or fatigue. A provocative new study argues that current AI models like ChatGPT and Gemini are fundamentally flawed because they lack \u201cinternal embodiment.\u201d<\/p>\n<p>While AI can describe a glass of water perfectly, it has no internal state of \u201cthirst\u201d to regulate its behavior. Researchers argue that without these internal \u201cvulnerabilities\u201d and self-regulators, AI will remain prone to overconfident errors and struggle to truly align with human values.<\/p>\n<p>Key Facts<\/p>\n<p>The Missing Ingredient: The study distinguishes between External Embodiment (interacting with the physical world) and Internal Embodiment (the constant monitoring of internal states like fatigue, uncertainty, or need).<\/p>\n<p>The Perceptual Test: Researchers tested leading AI models using \u201cpoint-light displays\u201d (dots that suggest a human figure). While even human newborns recognize the person, AI models failed, sometimes describing the dots as a \u201cconstellation of stars.\u201d<\/p>\n<p>Safety via Vulnerability: In humans, the body acts as a built-in safety system. If we are \u201cdepleted\u201d or \u201cuncertain,\u201d our body registers it. 
AI lacks this \u201cinternal cost,\u201d meaning it has no intrinsic reason to avoid being overconfident when it\u2019s actually guessing.<\/p>\n<p>The Dual-Embodiment Framework: UCLA researchers propose a new architecture for AI that tracks \u201csynthetic\u201d internal states\u2014such as processing load and confidence levels\u2014to constrain behavior over time.<\/p>\n<p>Moving Beyond Mimicry: The team argues for new benchmarks that measure whether an AI can monitor itself and maintain stability, rather than just testing whether it can identify objects or pass a bar exam.<\/p>\n<p>Source: UCLA<\/p>\n<p>When a person reaches across a table to pass the salt, their brain is doing something far more complex than recognizing a request and executing a movement. It is drawing on a lifetime of bodily experience \u2014 where their hand is in space, what a saltshaker feels like, the social awareness of who asked and why. In a fraction of a second, their body and brain are working as one.<\/p>\n<p>Today\u2019s most advanced artificial intelligence systems lack such bodily mechanisms, and a new study by UCLA Health argues that this has significant implications for how these models behave as well as how safe and trustworthy they can become.<\/p>\n<p>  <img fetchpriority=\"high\" decoding=\"async\" width=\"1200\" height=\"800\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2026\/04\/ai-feeling-neuroscience.jpg\" alt=\"This shows the outline of a robot.\"  \/> Researchers argue that \u201cinternal embodiment\u201d is the next great frontier in creating trustworthy and human-aligned artificial intelligence. 
Credit: Neuroscience News<\/p>\n<p>In a paper published in the journal\u00a0Neuron, UCLA Health postdoctoral fellow Akila Kadambi and colleagues propose that current AI systems are missing two essential ingredients that humans take for granted: a body that interacts with the physical world and an internal awareness of that body\u2019s own states such as fatigue, uncertainty or physiological need.<\/p>\n<p>The researchers call this combined property \u201cinternal embodiment,\u201d and propose that building functional analogues of it into AI represents one of the most crucial and underexplored frontiers in the field.<\/p>\n<p>\u201cWhile there is a current focus in world modeling on external embodiment, such as our outward interactions with the world, far less attention is given to internal dynamics, or what we term \u2018internal embodiment\u2019. In humans, the body acts as our experiential regulator of the world, as a kind of built-in safety system,\u201d said Akila Kadambi, a postdoctoral fellow in the Department of Psychiatry and Biobehavioral Sciences at UCLA\u2019s David Geffen School of Medicine and the paper\u2019s first author.<\/p>\n<p>\u201cIf you\u2019re uncertain, if you\u2019re depleted, if something conflicts with your survival, your body registers that. AI systems right now have no equivalent. They can sound experiential, whether they should be or not, and that\u2019s a real problem for many reasons, especially when these systems are being deployed in consequential settings.\u201d<\/p>\n<p>The AI body gap<\/p>\n<p>The paper focuses on multimodal large language models, which is the class of technology that powers tools such as ChatGPT and Google\u2019s Gemini. 
While these systems can process and generate text, images and video, and can describe a cup of water perfectly, they cannot know what it feels like to be thirsty, the authors state.<\/p>\n<p>That distinction is not only philosophical but also has measurable consequences for how these systems perform and behave. In one illustration from the paper, researchers showed several leading AI models a simple image: a small number of dots arranged to suggest a human figure in motion, a well-established perceptual test known as a point-light display that even newborns can recognize as human.<\/p>\n<p>Several models failed to identify the figure as a person, with one describing it instead as a constellation of stars. When the same image was rotated just 20 degrees, even the best-performing models broke down.<\/p>\n<p>Humans pass this test effortlessly because human perception is anchored to a lifetime of bodily experience gained by moving through the world as acting agents. AI systems, trained on vast libraries of text and images but with no bodily experience, are pattern-matching without that anchor, the study authors state.<\/p>\n<p>Two kinds of \u2018embodiment\u2019<\/p>\n<p>The paper draws a distinction that has not previously been made explicit in AI research. It defines \u201cexternal embodiment\u201d as a system\u2019s ability to interact with the physical world, to perceive its environment, plan actions and respond to real-world feedback, which is an important focus in current multimodal AI models. Internal embodiment, however, has not been implemented in these models. The authors define this as the continuous monitoring of one\u2019s own internal states, the biological equivalent of knowing you are tired, uncertain or in need.<\/p>\n<p>Humans regulate these internal states constantly and automatically using the body\u2019s organs, hormones and nervous system. 
Humans use that information not just to maintain physical health, but to shape attention, memory, emotion and social behavior.<\/p>\n<p>\u201cBy contrast, current AI systems have no equivalent mechanism. They process inputs and generate outputs without any persistent internal state that regulates how they behave over time,\u201d said\u00a0Dr. Marco Iacoboni, professor in the Department of Psychiatry and Biobehavioral Sciences at the David Geffen School of Medicine and a senior author on the paper.<\/p>\n<p>\u201cThis is not just a performance limitation, but also a safety limitation. Without internal costs or constraints, an AI system has no intrinsic reason to avoid overconfident errors, resist manipulation or behave consistently.\u201d<\/p>\n<p>What comes next<\/p>\n<p>The authors state the paper is meant to guide future research as AI technology develops. They propose what they call a \u201cdual-embodiment framework,\u201d a set of principles for building AI systems that model both their interactions with the external world and their own internal states.<\/p>\n<p>These internal state variables would not need to replicate human biology directly but would function as persistent signals tracking things like uncertainty, processing load and confidence that could shape the system\u2019s outputs and constrain its behavior over time.<\/p>\n<p>The authors also propose a new class of tests, or benchmarks, designed to measure a system\u2019s internal embodiment. 
Existing AI benchmarks focus almost exclusively on external performance, such as whether the system can navigate a space, identify an object or complete a task.<\/p>\n<p>The UCLA researchers argue the field needs evaluations that probe whether a system can monitor its own internal states, maintain stability when those states are disrupted and behave pro-socially in ways that emerge from shared internal representations rather than statistical mimicry.<\/p>\n<p>\u201cWhat this work does is bring that insight directly to bear on AI development,\u201d Iacoboni said. \u201cIf we want AI systems that are genuinely aligned with human behavior \u2014 not just superficially fluent \u2014 we may need to give them vulnerabilities and checks that function like internal self-regulators.\u201d<\/p>\n<p>Key Questions Answered<\/p>\n<p>Q: Why does an AI need to feel \u201cthirsty\u201d to tell me where the nearest water fountain is?<\/p>\n<p class=\"schema-faq-answer\">A: It\u2019s about the anchor of experience. Because you know what thirst feels like, your brain prioritizes water-seeking behavior in a way that is consistent and survival-oriented. For an AI, \u201cwater\u201d is just a statistical token. Without an internal state to regulate its \u201cdesire\u201d or \u201curgency,\u201d its advice can be inconsistent or dangerously overconfident because it doesn\u2019t \u201ccare\u201d about the outcome.<\/p>\n<p>Q: What was the \u201cPoint-Light\u201d test, and why did the AI fail it?<\/p>\n<p class=\"schema-faq-answer\">A: A point-light display is just a few dots moving like a human walking. Humans see the \u201cperson\u201d immediately because we have spent our lives moving our own bodies. 
AI models, trained only on static images and text, lack that \u201cbodily anchor.\u201d They see the dots as math, not as a reflection of a physical being, which is why rotating the image by just 20 degrees made the models break down completely.<\/p>\n<p>Q: Are we trying to give AI \u201cfeelings\u201d or just \u201cfeedback loops\u201d?<\/p>\n<p class=\"schema-faq-answer\">A: The researchers call them \u201cfunctional analogues.\u201d They don\u2019t need to feel \u201csad,\u201d but they do need a persistent internal signal that says, \u201cI am currently at 90% processing capacity and my confidence in this answer is low.\u201d In humans, those signals prevent us from making reckless decisions; in AI, they could serve as the ultimate \u201ckill switch\u201d for misinformation.<\/p>\n<p>Editorial Notes: This article was edited by a Neuroscience News editor. Journal paper reviewed in full. Additional context added by our staff.<\/p>\n<p>About this AI and neurotech research news<\/p>\n<p class=\"has-background\" style=\"background-color:#ffffe8\">Author:\u00a0<a href=\"http:\/\/neurosciencenews.com\/cdn-cgi\/l\/email-protection#cabda2a5bfb9bea5a48aa7afaea4afbee4bfa9a6abe4afaebf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Will Houston<\/a><br \/>Source:\u00a0<a href=\"https:\/\/ucla.edu\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">UCLA<\/a><br \/>Contact:\u00a0Will Houston \u2013 UCLA<br \/>Image:\u00a0The image is credited to Neuroscience News<\/p>\n<p class=\"has-background\" style=\"background-color:#ffffe8\">Original Research:\u00a0Open access.<br \/>\u201c<a href=\"https:\/\/doi.org\/10.1016\/j.neuron.2026.03.004\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Embodiment in multimodal large language models<\/a>\u201d by Akila Kadambi, Lisa Aziz-Zadeh, Antonio Damasio, Marco Iacoboni, and Srini Narayanan.\u00a0Neuron<br \/>DOI: 10.1016\/j.neuron.2026.03.004<\/p>\n<p>Abstract<\/p>\n<p>Embodiment in multimodal large language 
models<\/p>\n<p>Multimodal large language models (MLLMs) have demonstrated an extraordinary capacity to bridge textual and visual inputs. Nonetheless, MLLMs still face limitations in situated physical and social interactions in sensorially rich and multimodal real-world settings, where the embodied experience of a living organism appears fundamental.<\/p>\n<p>We suggest that the next frontiers for MLLM development require the incorporation of both internal and external embodiment\u2014modeling not only external interactions with the world but also internal states and drives.<\/p>\n<p>Here, we describe mechanisms of internal and external embodiment in humans and relate these to current advances in MLLMs in the early stages of aligning to human representations.<\/p>\n<p>Our dual-embodied framework proposes to model interactions between these forms of embodiment in MLLMs so as to bridge the gap between multimodal data and world experience.<\/p>\n","protected":false},"excerpt":{"rendered":"Summary: When you reach for a saltshaker, your brain isn\u2019t just calculating coordinates; it\u2019s listening to your 
body\u2019s&hellip;\n","protected":false},"author":2,"featured_media":382942,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,44005,218,219,170782,61,170783,170784,60,18317,4282,87,44998,10116,80,245],"class_list":{"0":"post-382941","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-safety","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-human-ai-alignment","13":"tag-ie","14":"tag-internal-embodiment","15":"tag-interoception","16":"tag-ireland","17":"tag-multimodal-ai","18":"tag-neurobiology","19":"tag-neuroscience","20":"tag-neurotech","21":"tag-robotics","22":"tag-technology","23":"tag-ucla"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/382941","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=382941"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/382941\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/382942"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=382941"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=382941"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=382941"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}