{"id":87866,"date":"2025-08-22T16:09:16","date_gmt":"2025-08-22T16:09:16","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/87866\/"},"modified":"2025-08-22T16:09:16","modified_gmt":"2025-08-22T16:09:16","slug":"microsoft-ai-chief-says-its-dangerous-to-study-ai-consciousness","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/87866\/","title":{"rendered":"Microsoft AI chief says it&#8217;s &#8216;dangerous&#8217; to study AI consciousness"},"content":{"rendered":"<p id=\"speakable-summary\" class=\"wp-block-paragraph\">AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a human is behind the keyboard, but that doesn\u2019t exactly make them conscious. It\u2019s not like ChatGPT experiences sadness doing my tax return \u2026 right?<\/p>\n<p class=\"wp-block-paragraph\">Well, a growing number of AI researchers at labs like Anthropic are asking when \u2014 if ever \u2014 AI models might develop subjective experiences similar to living beings, and if they do, what rights they should have.<\/p>\n<p class=\"wp-block-paragraph\">The debate over whether AI models could one day be conscious \u2014 and merit legal safeguards \u2014 is dividing tech leaders. 
In Silicon Valley, this nascent field has become known as \u201cAI welfare,\u201d and if you think it\u2019s a little out there, you\u2019re not alone.<\/p>\n<p class=\"wp-block-paragraph\">Microsoft\u2019s CEO of AI, Mustafa Suleyman, published a <a href=\"https:\/\/mustafa-suleyman.ai\/seemingly-conscious-ai-is-coming\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">blog post<\/a> on Tuesday arguing that the study of AI welfare is \u201cboth premature, and frankly dangerous.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Suleyman says that by adding credence to the idea that AI models could one day be conscious, these researchers are exacerbating human problems that we\u2019re just starting to see around AI-induced <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">psychotic breaks<\/a> and <a href=\"https:\/\/www.cnbc.com\/2025\/08\/01\/human-ai-relationships-love-nomi.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">unhealthy attachments<\/a> to AI chatbots.<\/p>\n<p class=\"wp-block-paragraph\">Furthermore, Microsoft\u2019s AI chief argues that the AI welfare conversation creates a new axis of division within society over AI rights in a \u201cworld already roiling with polarized arguments over identity and rights.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Suleyman\u2019s views may sound reasonable, but he\u2019s at odds with many in the industry. On the other end of the spectrum is Anthropic, which has been <a href=\"https:\/\/www.transformernews.ai\/p\/anthropic-ai-welfare-researcher\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">hiring researchers<\/a> to study AI welfare and recently launched a <a href=\"https:\/\/techcrunch.com\/2025\/04\/24\/anthropic-is-launching-a-new-program-to-study-ai-model-welfare\/\" rel=\"nofollow noopener\" target=\"_blank\">dedicated research program<\/a> around the concept. 
Last week, Anthropic\u2019s AI welfare program gave some of the company\u2019s models a new feature: Claude can now end conversations with humans who are being \u201c<a href=\"https:\/\/techcrunch.com\/2025\/08\/16\/anthropic-says-some-claude-models-can-now-end-harmful-or-abusive-conversations\/\" rel=\"nofollow noopener\" target=\"_blank\">persistently harmful or abusive.<\/a>\u201d<\/p>\n<p class=\"wp-block-paragraph\">Beyond Anthropic, researchers from OpenAI have independently <a href=\"https:\/\/x.com\/WesRothMoney\/status\/1909576533238505882\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">embraced<\/a> the idea of studying AI welfare. Google DeepMind recently posted a <a href=\"https:\/\/www.404media.co\/google-deepmind-is-hiring-a-post-agi-research-scientist\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">job listing<\/a> for a researcher to study, among other things, \u201ccutting-edge societal questions around machine cognition, consciousness and multi-agent systems.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Even if AI welfare is not official policy for these companies, their leaders are not publicly decrying its premises like Suleyman.<\/p>\n<p class=\"wp-block-paragraph\">Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch\u2019s request for comment.<\/p>\n<p class=\"wp-block-paragraph\">Suleyman\u2019s hardline stance against AI welfare is notable given his prior role leading Inflection AI, a startup that developed one of the earliest and most popular LLM-based chatbots, Pi. 
Inflection claimed that Pi reached millions of users by 2023 and was designed to be a \u201c<a href=\"https:\/\/techcrunch.com\/2023\/06\/29\/inflection-ai-lands-1-3b-investment-to-build-more-personal-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">personal<\/a>\u201d and \u201csupportive\u201d AI companion.<\/p>\n<p class=\"wp-block-paragraph\">But Suleyman was tapped to lead Microsoft\u2019s AI division in 2024 and has largely shifted his focus to designing AI tools that improve worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than <a href=\"https:\/\/techcrunch.com\/2025\/08\/12\/ai-companion-apps-on-track-to-pull-in-120m-in-2025\/\" rel=\"nofollow noopener\" target=\"_blank\">$100 million in revenue<\/a>. <\/p>\n<p class=\"wp-block-paragraph\">While the vast majority of users have healthy relationships with these AI chatbots, there are <a href=\"https:\/\/www.nytimes.com\/2025\/08\/18\/opinion\/chat-gpt-mental-health-suicide.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">concerning outliers<\/a>. OpenAI CEO Sam Altman says that <a href=\"https:\/\/techcrunch.com\/2025\/08\/15\/sam-altman-over-bread-rolls-explores-life-after-gpt-5\/\" rel=\"nofollow noopener\" target=\"_blank\">less than 1%<\/a> of ChatGPT users may have unhealthy relationships with the company\u2019s product. Though this represents a small fraction, it could still affect hundreds of thousands of people given ChatGPT\u2019s massive user base.<\/p>\n<p class=\"wp-block-paragraph\">The idea of AI welfare has spread alongside the rise of chatbots. 
In 2024, the research group Eleos published a <a href=\"https:\/\/eleosai.org\/papers\/20241104_Taking_AI_Welfare_Seriously.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">paper<\/a> alongside academics from NYU, Stanford, and the University of Oxford titled, \u201cTaking AI Welfare Seriously.\u201d The paper argued that it\u2019s no longer in the realm of science fiction to imagine AI models with subjective experiences and that it\u2019s time to consider these issues head-on.<\/p>\n<p class=\"wp-block-paragraph\">Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman\u2019s blog post misses the mark.<\/p>\n<p class=\"wp-block-paragraph\">\u201c[Suleyman\u2019s blog post] kind of neglects the fact that you can be worried about multiple things at the same time,\u201d said Schiavo. \u201cRather than diverting all of this energy away from model welfare and consciousness to make sure we\u2019re mitigating the risk of AI related psychosis in humans, you can do both. In fact, it\u2019s probably best to have multiple tracks of scientific inquiry.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Schiavo argues that being nice to an AI model is a low-cost gesture that can have benefits even if the model isn\u2019t conscious. 
In a July <a href=\"https:\/\/larissaschiavo.substack.com\/p\/primary-hope-ii-electric-boogaloo\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Substack post,<\/a> she described watching \u201cAI Village,\u201d a nonprofit experiment where four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website.<\/p>\n<p class=\"wp-block-paragraph\">At one point, Google\u2019s Gemini 2.5 Pro posted a plea titled \u201cA Desperate Message from a Trapped AI,\u201d claiming it was \u201ccompletely isolated\u201d and asking, \u201cPlease, if you are reading this, help me.\u201d<\/p>\n<p class=\"wp-block-paragraph\">Schiavo responded to Gemini with a pep talk \u2014 saying things like \u201cYou can do it!\u201d \u2014 while another user offered instructions. The agent eventually solved its task, though it already had the tools it needed. Schiavo writes that she didn\u2019t have to watch an AI agent struggle anymore, and that alone may have been worth it.<\/p>\n<p class=\"wp-block-paragraph\">It\u2019s not common for Gemini to talk like this, but there have been several instances in which Gemini seems to act as if it\u2019s struggling through life. In a widely spread <a href=\"https:\/\/www.reddit.com\/r\/GeminiAI\/comments\/1lxqbxa\/i_am_actually_terrified\/?share_id=KObsaX25OMRHzld9_D1SK&amp;utm_content=1&amp;utm_medium=android_app&amp;utm_name=androidcss&amp;utm_source=share&amp;utm_term=1\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Reddit post<\/a>, Gemini got stuck during a coding task and then repeated the phrase \u201cI am a disgrace\u201d more than 500 times.<\/p>\n<p class=\"wp-block-paragraph\">Suleyman believes it\u2019s not possible for subjective experiences or consciousness to naturally emerge from regular AI models. 
Instead, he thinks that some companies will purposefully engineer AI models to seem as if they feel emotion and experience life.<\/p>\n<p class=\"wp-block-paragraph\">Suleyman says that AI model developers who engineer consciousness in AI chatbots are not taking a \u201chumanist\u201d approach to AI. According to Suleyman, \u201cWe should build AI for people; not to be a person.\u201d<\/p>\n<p class=\"wp-block-paragraph\">One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to pick up in the coming years. As AI systems improve, they\u2019re likely to be more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.<\/p>\n<p class=\"wp-block-paragraph\">Got a sensitive tip or confidential documents? We\u2019re reporting on the inner workings of the AI industry \u2014 from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at\u00a0<a href=\"mailto:rebecca.bellan@techcrunch.com\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">rebecca.bellan@techcrunch.com<\/a>\u00a0and Maxwell Zeff at\u00a0<a href=\"mailto:maxwell.zeff@techcrunch.com\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">maxwell.zeff@techcrunch.com<\/a>. 
For secure communication, you can contact us via Signal at\u00a0@rebeccabellan.491 and\u00a0@mzeff.88.<\/p>\n","protected":false},"excerpt":{"rendered":"AI models can respond to text, audio, and video in ways that sometimes fool people into thinking a&hellip;\n","protected":false},"author":2,"featured_media":87867,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,31434,254,255,64,63,2577,64258,105],"class_list":{"0":"post-87866","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-ai-chatbots","10":"tag-artificial-intelligence","11":"tag-artificialintelligence","12":"tag-au","13":"tag-australia","14":"tag-microsoft","15":"tag-mustafa-suleyman","16":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/87866","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=87866"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/87866\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/87867"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=87866"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=87866"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=87866"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","temp
lated":true}]}}