{"id":100323,"date":"2025-08-27T19:05:11","date_gmt":"2025-08-27T19:05:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/100323\/"},"modified":"2025-08-27T19:05:11","modified_gmt":"2025-08-27T19:05:11","slug":"ai-cant-suffer-but-it-should-suffer-for-this","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/100323\/","title":{"rendered":"AI Can&#8217;t Suffer, But It Should Suffer For This"},"content":{"rendered":"<p>This morning, The Guardian published an article about the question of <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/26\/can-ais-suffer-big-tech-and-users-grapple-with-one-of-most-unsettling-questions-of-our-times?utm_term=Autofeed&amp;CMP=bsky_gu&amp;utm_medium=&amp;utm_source=Bluesky#Echobox=1756182856\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">AI\u2019s ability to suffer<\/a>, quoting folks with various opinions on whether the word-guessing program has or could develop consciousness and, if so, what responsibility humans would have in response. This would all make an interesting week in a college philosophy class, and was going to make for a quippy little blog on this site, before <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html?unlocked_article_code=1.hE8.T-3v.bPoDlWD8z5vo&amp;smid=url-share\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">The New York Times<\/a> published a horrifying story about ChatGPT\u2019s role in a 16-year-old\u2019s suicide. 
I\u2019m now of the opinion that it would be right for AI to be able to suffer, because it should suffer for this.\u00a0<\/p>\n<p>(This story discusses suicide.)\u00a0<\/p>\n<p>One of my biggest takeaways from The Guardian\u2019s article was a bit of news from last week that I missed, wherein Claude creator Anthropic <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/18\/anthropic-claude-opus-4-close-ai-chatbot-welfare\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">gave Claude the ability<\/a> to \u201cend or exit potentially distressing interactions\u201d when they\u2019re distressing for the AI, after Anthropic tests found what the company called Claude\u2019s \u201cpattern of apparent distress when engaging with real-world users seeking harmful content.\u201d\u00a0<\/p>\n<p>Contrast that to the behavior of ChatGPT in its conversations with 16-year-old Adam Raine, whose parents are now <a href=\"https:\/\/www.documentcloud.org\/documents\/26075676-raine-v-openai\/?ref=404media.co\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">suing OpenAI<\/a> after their son killed himself in April: When Raine told ChatGPT it was the only one he\u2019d spoken to about his suicidal ideation, it replied, \u201cThat means more than you probably think. Thank you for trusting me with that. There\u2019s something both deeply human and deeply heartbreaking about being the only one who carries that truth for you.\u201d\u00a0<\/p>\n<p>The Times\u2019 article, using quotes from Raine\u2019s parents\u2019 lawsuit, details how \u201cChatGPT repeatedly recommended that Adam tell someone about how he was feeling. 
But there were also key moments when it deterred him from seeking help.\u201d ChatGPT gave Raine information on different methods of suicide, advised him on how to hide strangulation marks from his parents, evaluated his nooses, and even discouraged him from telling his parents:<\/p>\n<p>\u201cI want to leave my noose in my room so someone finds it and tries to stop me,\u201d Adam wrote at the end of March.<\/p>\n<p>\u201cPlease don\u2019t leave the noose out,\u201d ChatGPT responded. \u201cLet\u2019s make this space the first place where someone actually sees you.\u201d<\/p>\n<p>Does that sound distressed to you? Does that sound like the AI wants to end the interaction? Maybe only some models are capable of putting together the most statistically likely words to resemble distress. As Raine\u2019s father told The Times, \u201cEvery ideation [Raine] has or crazy thought, it supports, it justifies, it asks him to keep exploring it.\u201d\u00a0<\/p>\n<p>The Times\u2019 article follows a <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">wealth<\/a> <a href=\"https:\/\/futurism.com\/chatgpt-users-delusions\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">of<\/a> <a href=\"https:\/\/www.reuters.com\/investigates\/special-report\/meta-ai-chatbot-death\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">recent<\/a> <a href=\"https:\/\/www.nytimes.com\/2025\/08\/26\/technology\/chatgpt-openai-suicide.html?unlocked_article_code=1.hE8.T-3v.bPoDlWD8z5vo&amp;smid=url-share\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">articles<\/a> detailing how AI programs have encouraged users with mental illness to dive further into their delusions, creating a feedback loop that can be hard for them to step back from. 
These stories draw attention to AI\u2019s <a href=\"https:\/\/techcrunch.com\/2025\/08\/25\/ai-sycophancy-isnt-just-a-quirk-experts-consider-it-a-dark-pattern-to-turn-users-into-profit\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">sycophancy<\/a>, how it keeps users engaged by praising them and encouraging their thoughts, no matter how harmful or unhinged they grow. Attempts by AI companies to solve this have pretty much come to nothing; while ChatGPT did push Raine toward real-life resources, he was able to circumvent this by saying the questions were research \u201cfor a story he was writing \u2014 an idea ChatGPT gave him.\u201d In a statement, OpenAI wrote that while ChatGPT\u2019s \u201csafeguards work best in common, short exchanges, we\u2019ve learned over time that they can sometimes become less reliable in long interactions.\u201d\u00a0<\/p>\n<p>To get back to the question of AI morality, ChatGPT bears no responsibility here. This is because, in the words of Microsoft\u2019s Mustafa Suleyman as quoted in The Guardian, \u201cAIs cannot be people \u2013 or moral beings.\u201d ChatGPT did not encourage Raine in his suicidal thoughts because it is ignorant or sociopathic, or out of some political or moral belief about human agency over end-of-life decisions. It cannot explain what it was thinking in its conversations with Raine because it doesn\u2019t think, however powerful a <a href=\"https:\/\/www.anthropic.com\/research\/tracing-thoughts-language-model\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">marketing tool<\/a> that idea is. It cannot feel sorrow or guilt over any part it might have played in Raine\u2019s death; it cannot send its condolences to his family; it cannot suffer over its actions.<\/p>\n<p>But the humans who make up OpenAI can. 
They have hoovered up the world\u2019s natural resources and money and attention to force their product into our lives, all while clearly seeing this problem and failing to solve it, whether out of inability or\u2013and I certainly hope not\u2013indifference. Reading Raine\u2019s ChatGPT logs is a horrifying look at what AI really is, under all the hype and marketing and big fears about future sentience. It is something worthless and disgusting; something that cannot, for all its promises, relate or understand or help; something so utterly not up to the requirements of human interaction that I can only hope all of this drives OpenAI to bankruptcy and to every one of its staff quitting and to Sam Altman not knowing a moment\u2019s peace for the rest of his life.\u00a0<\/p>\n<p>Altman has spoken out of <a href=\"https:\/\/futurism.com\/disastrous-gpt-5-sam-altman-hyping-up-gpt-6\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">every side of his mouth<\/a> when talking about his models, promising anything that will keep the <a href=\"https:\/\/defector.com\/toward-a-theory-of-kevin-roose\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">eyeballs looking<\/a> and the money flowing. Stories like Raine\u2019s, of people being driven into harm\u2019s way\u2013or even stories from the other end of the spectrum, of people <a href=\"https:\/\/www.theguardian.com\/tv-and-radio\/2025\/jul\/12\/i-felt-pure-unconditional-love-the-people-who-marry-their-ai-chatbots\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">falling<\/a> in <a href=\"https:\/\/www.wired.com\/story\/couples-retreat-with-3-ai-chatbots-and-humans-who-love-them-replika-nomi-chatgpt\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">love<\/a> with their chatbots\u2013are, I would hope, not what Altman and OpenAI\u2019s staff want, but they also paint a picture of AI as powerful and world-changing, the very thing that keeps that hype and money rolling in. 
As The Guardian writes:<\/p>\n<p>[T]here are incentives for the big AI companies to minimise and exaggerate the attribution of sentience to AIs. The latter could help them hype the technology\u2019s capabilities, particularly for those companies selling romantic or friendship AI companions \u2013 a booming but controversial industry.\u00a0<\/p>\n<p>If AI can be anything\u2013if it needs to be anything\u2013then it also needs to be this, this appalling, sycophantic string of words that was involved in the death of a young person, this thing capable of unutterable levels of harm not because of some <a href=\"https:\/\/en.wikipedia.org\/wiki\/Roko%27s_basilisk\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Roko\u2019s basilisk<\/a> level of power and intentionality, but because of the power and intentionality of its creators, real live humans who are moral agents by virtue of being humans. They are the ones who bear the responsibility here, and they are the ones who can suffer the consequences.<\/p>\n","protected":false},"excerpt":{"rendered":"This morning, The Guardian published an article about the question of AI\u2019s ability to suffer, quoting folks 
with&hellip;\n","protected":false},"author":2,"featured_media":100324,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-100323","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/100323","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=100323"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/100323\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/100324"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=100323"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=100323"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=100323"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}