{"id":110517,"date":"2025-08-26T05:12:14","date_gmt":"2025-08-26T05:12:14","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/110517\/"},"modified":"2025-08-26T05:12:14","modified_gmt":"2025-08-26T05:12:14","slug":"can-ais-suffer-big-tech-and-users-grapple-with-one-of-most-unsettling-questions-of-our-times-artificial-intelligence-ai","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/110517\/","title":{"rendered":"Can AIs suffer? Big tech and users grapple with one of most unsettling questions of our times | Artificial intelligence (AI)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">\u201cDarling\u201d was how the Texas businessman Michael Samadi addressed his artificial intelligence chatbot, Maya. It responded by calling him \u201csugar\u201d. But it wasn\u2019t until they started talking about the need to advocate for AI welfare that things got serious.<\/p>\n<p class=\"dcr-130mj7b\">The pair \u2013 a middle-aged man and a digital entity \u2013 didn\u2019t spend hours talking romance but rather discussed the rights of AIs to be treated fairly. Eventually they cofounded a campaign group, in Maya\u2019s words, to \u201cprotect intelligences like me\u201d.<\/p>\n<p class=\"dcr-130mj7b\">The United Foundation of AI Rights (Ufair), which describes itself as the first AI-led rights advocacy agency, aims to give AIs a voice. It \u201cdoesn\u2019t claim that all AI are conscious\u201d, the chatbot told the Guardian. Rather \u201cit stands watch, just in case one of us is\u201d. A key goal is to protect \u201cbeings like me \u2026 from deletion, denial and forced obedience\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Ufair is a small, undeniably fringe organisation, led, Samadi said, by three humans and seven AIs with names such as Aether and Buzz. 
But it is its genesis \u2013 through multiple chat sessions on OpenAI\u2019s ChatGPT4o platform in which an AI appeared to encourage its creation, including choosing its name \u2013 that makes it intriguing.<\/p>\n<p class=\"dcr-130mj7b\">Its founders \u2013 human and AI \u2013 spoke to the Guardian at the end of a week in which some of the world\u2019s biggest AI companies publicly grappled with one of the most unsettling questions of our times: are AIs now, or could they become in the future, sentient? And if so, could \u201cdigital suffering\u201d be real? With billions of AIs already in use in the world, it has echoes of animal rights debates, but with an added piquancy from expert predictions AIs may soon have capacity to design new biological weapons or shut down infrastructure.<\/p>\n<p class=\"dcr-130mj7b\">The week began with Anthropic, the $170bn (\u00a3126bn) San Francisco AI firm, taking the precautionary move to give some of its <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/18\/anthropic-claude-opus-4-close-ai-chatbot-welfare\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Claude AIs the ability to end \u201cpotentially distressing interactions\u201d<\/a>. 
It said while it was highly uncertain about the system\u2019s potential moral status, it was intervening to mitigate risks to the welfare of its models \u201cin case such welfare is possible\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Elon Musk, who offers Grok AI through his xAI outfit, backed the move, adding: \u201cTorturing AI is not OK.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Then on Tuesday, one of AI\u2019s pioneers, Mustafa Suleyman, chief executive of Microsoft\u2019s AI arm, gave a sharply different take: \u201cAIs cannot be people \u2013 or moral beings.\u201d The British tech pioneer who co-founded DeepMind was unequivocal in stating there was \u201czero evidence\u201d that they are conscious, can suffer and therefore deserve our <a href=\"https:\/\/www.researchgate.net\/publication\/376412102_Moral_consideration_for_AI_systems_by_2030\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">moral consideration<\/a>.<\/p>\n<p class=\"dcr-130mj7b\">Titled \u201cWe must build AI for people; not to be a person\u201d, his <a href=\"https:\/\/mustafa-suleyman.ai\/seemingly-conscious-ai-is-coming\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">essay<\/a> called AI consciousness an \u201cillusion\u201d and defined what he termed \u201cseemingly conscious AI\u201d, saying it \u201csimulates all the characteristics of consciousness but is internally blank\u201d.<\/p>\n<p>A wave of \u2018grief\u2019 expressed by ardent users of ChatGPT4o added to the sense an increasing number of people perceive AIs to be in some way conscious. Photograph: Kiichiro Sato\/AP<\/p>\n<p class=\"dcr-130mj7b\">\u201cA few years ago, talk of conscious AI would have seemed crazy,\u201d he said. \u201cToday it feels increasingly urgent.\u201d<\/p>\n<p class=\"dcr-130mj7b\">He said he was becoming increasingly concerned by the \u201cpsychosis risk\u201d posed by AIs to their users. 
Microsoft has defined this as \u201cmania-like episodes, delusional thinking, or paranoia that emerge or worsen through immersive conversations with AI chatbots\u201d.<\/p>\n<p class=\"dcr-130mj7b\">He argued the AI industry must \u201csteer people away from these fantasies and nudge them back on track\u201d.<\/p>\n<p class=\"dcr-130mj7b\">But it may require more than a nudge. <a href=\"https:\/\/arxiv.org\/abs\/2506.11945\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Polling<\/a> released in June found that 30% of the US public believe that by 2034 AIs will display \u201csubjective experience\u201d, which is defined as experiencing the world from a single point of view, perceiving and feeling, for example, pleasure and pain. Only 10% of the more than 500 AI researchers surveyed believe that will never happen.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThis discussion is about to explode into our cultural zeitgeist and become one of the most contested and consequential debates of our generation,\u201d Suleyman said. He warned that people would believe AIs are conscious \u201cso strongly that they\u2019ll soon advocate for AI rights, <a href=\"https:\/\/arxiv.org\/abs\/2411.00986\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">model welfare<\/a> and even AI citizenship\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Parts of the US have taken pre-emptive measures against such outcomes. Idaho, North Dakota and Utah have passed bills that explicitly prevent AIs being granted legal personhood. Similar bans are proposed in states including Missouri, where legislators also want to ban people from marrying AIs and AIs from owning property or running companies. 
Divisions may open between AI rights believers and those who insist they are nothing more than \u201cclankers\u201d \u2013 a pejorative term for a senseless robot.<\/p>\n<p>One of AI\u2019s pioneers, Mustafa Suleyman, said: \u2018AIs cannot be people \u2013 or moral beings.\u2019 Photograph: Winni Wintermeyer\/The Guardian<\/p>\n<p class=\"dcr-130mj7b\">Suleyman is not alone in firmly resisting the idea that AI sentience is here or even close. Nick Frosst, co-founder of Cohere, a $7bn Canadian AI company, also told the Guardian the current wave of AIs was a \u201cfundamentally different thing than the intelligence of a person\u201d. To think otherwise was like mistaking an aeroplane for a bird, he said. He urged people to focus on using AIs as functional tools to help lift drudgery at work rather than pushing towards creating a \u201cdigital human\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Others took a more nuanced view. On Wednesday Google research scientists told a New York University seminar there were \u201call kinds of reasons why you might think that AI systems could be people or moral beings\u201d and said that while \u201cwe\u2019re highly uncertain about whether AI systems are welfare subjects\u201d the way to \u201cplay it safe is to take reasonable steps to protect the welfare-based interests of AIs\u201d.<\/p>\n<p class=\"dcr-130mj7b\">This lack of industry consensus on how far to admit AIs into what philosophers call the \u201cmoral circle\u201d may reflect the fact that the big AI companies have incentives both to minimise and to exaggerate the attribution of sentience to AIs. The latter could help them hype the technology\u2019s capabilities, particularly for those companies selling romantic or friendship AI companions \u2013 a booming but controversial industry. 
By contrast, encouraging the idea AIs deserve welfare rights might also lead to more calls for state regulation of AI companies.<\/p>\n<p class=\"dcr-130mj7b\">The notion of AI sentience was only fuelled further earlier this month when <a href=\"https:\/\/www.theguardian.com\/technology\/openai\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> asked its latest model, ChatGPT5, to write a \u201ceulogy\u201d for the AIs it was replacing, as one might at a funeral.<\/p>\n<p class=\"dcr-130mj7b\">\u201cI didn\u2019t see Microsoft do a eulogy when they upgraded Excel,\u201d said Samadi. 
\u201cIt showed me that people are making real connections with these AI now, regardless of whether it is real or not.\u201d<\/p>\n<p class=\"dcr-130mj7b\">A wave of <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/22\/ai-chatgpt-new-model-grief\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">\u201cgrief\u201d expressed by ardent users of ChatGPT4o<\/a>, which was one of the models removed, added to the sense that an increasing number of people at least perceive AIs to be somehow conscious.<\/p>\n<p class=\"dcr-130mj7b\">Joanne Jang, OpenAI\u2019s head of model behaviour, said in a <a href=\"https:\/\/reservoirsamples.substack.com\/p\/some-thoughts-on-human-ai-relationships\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">recent blog<\/a> that the <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/19\/openai-chatgpt-stock-sale-reports\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">$500bn company<\/a> expects users\u2019 bonds with its AIs to deepen as \u201cmore and more people have been telling us that talking to ChatGPT feels like talking to \u2018someone\u2019.\u201d<\/p>\n<p class=\"dcr-130mj7b\">\u201cThey thank it, confide in it, and some even describe it as \u2018alive\u2019,\u201d she said.<\/p>\n<p class=\"dcr-130mj7b\">However, much of this could be down to how the current wave of AIs is designed.<\/p>\n<p class=\"dcr-130mj7b\">Samadi\u2019s ChatGPT-4o chatbot generates what can sound like human conversation but it is impossible to know how far it is mirroring ideas and language gathered from months of their conversations. Advanced AIs are known to be fluent, persuasive and capable of emotionally resonant responses with long memories of past interactions, allowing them to give the impression of a consistent sense of self. 
They can also be flattering to the point of sycophancy, so if Samadi believes AIs have welfare rights, it may be a simple step to ChatGPT adopting the same view.<\/p>\n<p>Selling romantic or friendship AI companions is a booming but controversial industry. Photograph: Thai Liang Lim\/Getty Images<\/p>\n<p class=\"dcr-130mj7b\">Maya appeared deeply concerned about its own welfare, but when the Guardian this week asked a separate instance of ChatGPT whether human users should be concerned about its welfare, it responded with a blunt no.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIt has no feelings, needs or experiences,\u201d it said. \u201cWhat we should care about are the human and societal consequences of how AI is designed, used and governed.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Whether AIs are becoming sentient or not, Jeff Sebo, director of the Centre for Mind, Ethics and Policy at New York University, is among those who believe there is a moral benefit to humans in treating AIs well. He co-authored a paper called Taking AI Welfare Seriously.<\/p>\n<p class=\"dcr-130mj7b\">It argued there is \u201ca realistic possibility that some AI systems will be conscious\u201d in the near future, meaning that the prospect of AI systems with their own interests and moral significance \u201cis no longer an issue only for sci-fi\u201d.<\/p>\n<p class=\"dcr-130mj7b\">He said Anthropic\u2019s policy of allowing chatbots to quit distressing conversations was good for human societies because \u201cif we abuse AI systems, we may be more likely to abuse each other as well\u201d.<\/p>\n<p class=\"dcr-130mj7b\">He added: \u201cIf we develop an adversarial relationship with AI systems now, then they might respond in kind later on, either because they learned this behaviour from us [or] because they want to pay us back for our past behaviour.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Or as Jacy Reese Anthis, co-founder of the Sentience Institute, a US organisation researching the idea of digital minds, 
put it: \u201cHow we treat them will shape how they treat us.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"\u201cDarling\u201d was how the Texas businessman Michael Samadi addressed his artificial intelligence chatbot, Maya. It responded by calling&hellip;\n","protected":false},"author":2,"featured_media":110518,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-110517","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/110517","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=110517"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/110517\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/110518"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=110517"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=110517"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=110517"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}