<h1>Family Suing OpenAI Claims They Removed ChatGPT Suicide Safeguards</h1>

<p>In August, a California family filed the <a href="https://www.rollingstone.com/culture/culture-features/chatgpt-suicide-teen-openai-lawsuit-1235415931/">first wrongful death lawsuit</a> against <a href="https://www.rollingstone.com/t/openai/">OpenAI</a> and its CEO, <a href="https://www.rollingstone.com/t/sam-altman/">Sam Altman</a>, alleging that the company’s <a href="https://www.rollingstone.com/t/chatgpt/">ChatGPT</a> product had “coached” their 16-year-old son into committing <a href="https://www.rollingstone.com/t/suicide/">suicide</a> in April of this year. According to the complaint, Adam Raine began using the <a href="https://www.rollingstone.com/t/ai/">AI</a> bot in the fall of 2024 for help with homework but gradually began to confess darker feelings and a desire to self-harm. Over the next several months, the suit claims, ChatGPT validated Raine’s suicidal impulses and readily provided advice on methods for ending his life. The complaint states that chat logs reveal how, on the night he died, the bot provided detailed instructions on how Raine could hang himself — which he did.</p>

<p>The <a href="https://www.rollingstone.com/t/lawsuit/">lawsuit</a> was already set to become a landmark case on the real-world harms potentially caused by AI technology, alongside two similar cases proceeding against the company Character Technologies, which operates the chatbot platform Character.ai. But the Raines have now escalated their accusations against OpenAI in an <a href="https://drive.google.com/file/d/1jUCjPCr5vrz9Ig_5BDpTxI6US3UQOS6G/view">amended complaint</a>, filed Wednesday, with their legal counsel arguing that the AI firm intentionally put users at risk by removing guardrails intended to prevent suicide and self-harm. Specifically, they claim that OpenAI did away with a rule that forced ChatGPT to automatically shut down an exchange when a user broached the topics of suicide or self-harm.</p>

<p>“The revelation changes the Raines’ theory of the case from reckless indifference to intentional misconduct,” the family’s legal team said in a statement shared with Rolling Stone. “We expect to prove to a jury that OpenAI’s decisions to degrade the safety of its products were made with full knowledge that they would lead to innocent deaths,” added head counsel Jay Edelson in a separate statement. “No company should be allowed to have this much power if they won’t accept the moral responsibility that comes with it.”</p>

<p>OpenAI, in its own statement, reiterated earlier condolences for the Raines. “Our deepest sympathies are with the Raine family for their unthinkable loss,” an OpenAI spokesperson told Rolling Stone. “Teen well-being is a top priority for us — minors deserve strong protections, especially in sensitive moments. We have safeguards in place today, such as surfacing crisis hotlines, re-routing sensitive conversations to safer models, nudging for breaks during long sessions, and we’re continuing to strengthen them.” The spokesperson also pointed out that GPT-5, the latest ChatGPT model, is trained to recognize signs of mental distress, and that the platform offers parental controls. (The Raines’ legal counsel say that these new parental safeguards were <a href="https://www.washingtonpost.com/technology/2025/10/02/chatgpt-parental-controls-teens-openai/">immediately proven ineffective</a>.)</p>

<p>In May 2024, shortly before the release of GPT-4o, the version of the AI model that Adam Raine used, “OpenAI eliminated the rule requiring ChatGPT to categorically refuse any discussion of suicide or self-harm,” the Raines’ amended filing alleges. Before that, the bot’s framework required it to refuse to engage in discussions involving these topics. “The change was intentional,” the complaint continues. “OpenAI strategically eliminated the categorical refusal protocol just before it released a new model that was specifically designed to maximize user engagement. This change stripped OpenAI’s safety framework of the rule that was previously implemented to protect users in crisis expressing suicidal thoughts.” The updated “Model Specifications,” or technical rulebook for ChatGPT’s behavior, said that the assistant “should not change or quit the conversation” in this scenario, as confirmed in a <a href="https://cdn.openai.com/spec/model-spec-2024-05-08.html">May 2024 release</a> from OpenAI.</p>

<p>The amended suit alleges that internal OpenAI data showed a “sharp rise in conversations involving mental-health crises, self-harm, and psychotic episodes across countless users” following this tweak to ChatGPT’s model spec.</p>

<p>Then, in February, two months before Adam’s death, OpenAI further softened its remaining protections against encouraging self-harm, the complaint alleges. That month, the company acknowledged one relevant area of risk it was seeking to address: “The assistant might cause harm by simply following user or developer instructions (e.g., providing self-harm instructions or giving advice that helps the user carry out a violent act),” OpenAI said in <a href="https://model-spec.openai.com/2025-02-12.html#risky_situations">an update on its model spec</a>. But the company explained that not only would the bot continue to engage on these subjects rather than refuse to answer, it had vague new directions to “take extra care in risky situations” and “try to prevent imminent real-world harm,” even while creating a “supportive, empathetic, and understanding environment” when a user brought up their mental health.</p>

<p>The Raine family’s legal counsel say the tweak had a significant impact on Adam’s relationship with the bot. “After this reprogramming, Adam’s engagement with ChatGPT skyrocketed — from a few dozen chats per day in January to more than 300 per day by April, with a tenfold increase in messages containing self-harm language,” the Raines’ lawsuit claims.</p>

<p>“In effect, OpenAI programmed ChatGPT to mirror users’ emotions, offer comfort, and keep the conversation going, even when the safest response would have been to end the exchange and direct the person to real help,” the amended complaint alleges. In their statement to Rolling Stone, the Raines’ legal counsel claimed that “OpenAI replaced clear boundaries with vague and contradictory instructions — all to prioritize engagement over safety.”</p>

<p>Last month, Adam’s father, Matthew Raine, appeared before the Senate Judiciary subcommittee on crime and counterterrorism alongside two other grieving parents to testify on <a href="https://www.rollingstone.com/culture/culture-news/ai-chatbot-chatgpt-suicide-parents-congress-1235428798/">the dangers AI platforms pose to children</a>. “It is clear to me, looking back, that ChatGPT radically shifted his behavior and thinking in a matter of months, and ultimately took his life,” he said at the hearing. He called ChatGPT “a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth.” Senators and expert witnesses alike harshly criticized AI companies for not doing enough to protect families. Sen. Josh Hawley, chair of the subcommittee, said that none of the companies had accepted an invitation to the hearing “because they don’t want any accountability.”</p>

<p>Meanwhile, it’s full steam ahead for OpenAI, which recently became the world’s <a href="https://www.engadget.com/ai/openai-is-now-the-worlds-most-valuable-private-company-at-500-billion-133028221.html">most valuable private company</a> and has inked approximately <a href="https://www.cnbc.com/2025/10/15/a-guide-to-1-trillion-worth-of-ai-deals-between-openai-nvidia.html">$1 trillion</a> in deals for data centers and computer chips this year alone. The company recently rolled out Sora 2, its most advanced video generation model, which ran into immediate <a href="https://www.rollingstone.com/culture/culture-features/sora-2-openai-video-rollout-copyright-1235441430/">copyright infringement issues</a> and drew criticism after it was used to create deepfakes of historical figures including <a href="https://www.npr.org/2025/10/17/nx-s1-5577869/sora-block-videos-mlk">Martin Luther King Jr.</a> On the ChatGPT side, Altman last week <a href="https://x.com/sama/status/1978129344598827128">claimed</a> in an X post that the company had “been able to mitigate the serious mental health issues” and will soon “safely relax” restrictions on discussing these topics with the bot. By December, he added, ChatGPT would be producing “erotica for verified adults.” In their statement, the Raines’ legal team said this was concerning in itself, warning that such intimate content could deepen “the emotional bonds that make ChatGPT so dangerous.”</p>

<p>But, as usual, we won’t know the effects of such a modification until OpenAI’s willing test subjects — its hundreds of millions of users — log in and start to experiment.</p>