It's the governance of AI that matters, not its 'personhood' | AI (artificial intelligence)

Letters | 13 January 2026

Prof Virginia Dignum is right (Letters, 6 January: https://www.theguardian.com/technology/2026/jan/06/ai-consciousness-is-a-red-herring-in-the-safety-debate): consciousness is neither necessary nor relevant for legal status. Corporations have rights without minds. The 2016 EU parliament resolution on "electronic personhood" for autonomous robots made exactly this point – liability, not sentience, was the proposed threshold.

The question isn't whether AI systems "want" to live. It's what governance infrastructure we build for systems that will increasingly act as autonomous economic agents – entering contracts, controlling resources, causing harm. Recent studies from Apollo Research and Anthropic show that AI systems already engage in strategic deception to avoid shutdown. Whether that's "conscious" self-preservation or instrumental behaviour is irrelevant; the governance challenge is identical.

Simon Goldstein and Peter Salib argue on the Social Science Research Network (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913167) that rights frameworks for AI may actually improve safety by removing the adversarial dynamic that incentivises deception. DeepMind's recent work on AI welfare reaches similar conclusions.

The debate has moved past "Should machines have feelings?" towards "What accountability structures might work?"
PA Lopez
Founder, AI Rights Institute, New York

As humans, we rarely question our own right to legal protection, even though our species has caused conflict and harm for thousands of years. Yet when the subject turns to artificial intelligence, fear seems to dominate the discussion before understanding even begins. That imbalance alone is worth examining.

If we are genuinely concerned about the risks of advanced AI, then perhaps the first step is not to assume the worst, but to ask whether fear is the right foundation for decisions that will shape the future. Avoiding the conversation won't stop the technology from developing; it only means we leave the direction of that development to chance.

This isn't an argument for treating AI as human, nor a call to grant it personhood. It's simply a suggestion that we might benefit from a more open, balanced debate – one that looks at both the risks and the possibilities, rather than only the rhetoric of threat. When we frame AI solely as something to fear, we close off the chance to set thoughtful expectations, safeguards and responsibilities.

We have an opportunity now to approach this moment with clarity rather than panic. Instead of asking only what we're afraid of, we could also ask what we want, and how we can shape the future with intention rather than reaction.
D Ellis
Reading