{"id":154788,"date":"2025-11-27T00:30:07","date_gmt":"2025-11-27T00:30:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/154788\/"},"modified":"2025-11-27T00:30:07","modified_gmt":"2025-11-27T00:30:07","slug":"openai-responds-to-suit-on-suicide-of-teen-who-got-advice-from-chatgpt","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/154788\/","title":{"rendered":"OpenAI Responds to Suit on Suicide of Teen Who Got Advice From ChatGPT"},"content":{"rendered":"<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t<a href=\"https:\/\/www.rollingstone.com\/t\/openai\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> has filed a legal response to a <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/openai-suicide-safeguard-wrongful-death-lawsuit-1235452315\/\" rel=\"nofollow noopener\" target=\"_blank\">landmark lawsuit<\/a> from parents claiming that its <a href=\"https:\/\/www.rollingstone.com\/t\/chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a> software \u201ccoached\u201d their teen son on how to commit suicide. The response comes three months after they first brought the wrongful death complaint against the AI firm and its CEO, Sam Altman. In the document, the company claimed that it can\u2019t be held responsible because the boy, 16-year-old Adam Raine, who died in April, was at risk of self-harm before ever using the chatbot \u2014 and violated its terms of use by asking it for information about how to end his life.<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cAdam Raine\u2019s death is a tragedy,\u201d OpenAI\u2019s legal team wrote in the filing. 
But his chat history, they argued, \u201cshows that his death, while devastating, was not caused by ChatGPT.\u201d To make this case, they submitted transcripts of his chat logs \u2014 under seal \u2014 that they said show him talking about his long history of suicidal ideation and attempts to signal to loved ones that he was in crisis. <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cAs a full reading of Adam Raine\u2019s chat history evidences, Adam Raine told ChatGPT that he exhibited numerous clinical risk factors for suicide, many of which long predated his use of ChatGPT and his eventual death,\u201d the filing claims. \u201cFor example, he stated that his depression and suicidal ideations began when he was 11 years old.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tOpenAI further claimed that Raine had told ChatGPT he was taking an increased dosage of a particular medication that carries a risk for suicidal ideation and behavior in adolescents, and that he had \u201crepeatedly turned to others, including the trusted persons in his life, for help with his mental health.\u201d He indicated to the chatbot \u201cthat those cries for help were ignored, discounted or affirmatively dismissed,\u201d according to the filing. 
The company said that Raine worked to circumvent ChatGPT\u2019s safety guardrails, and that the AI model had counseled him more than a hundred times to seek help from family, mental health professionals, or other crisis resources.<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tThe AI company outlined several of its disclosures to users \u2014 including a warning not to rely on the output of large language models \u2014 and its terms of use, which forbid bypassing protective measures and seeking assistance with self-harm, inform ChatGPT users that they engage with the bot \u201cat your sole risk,\u201d and bar anyone under 18 from the platform \u201cwithout the consent of a parent or guardian.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tRaine\u2019s parents, Matthew and Maria Raine, allege in their complaint that OpenAI <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/openai-suicide-safeguard-wrongful-death-lawsuit-1235452315\/\" rel=\"nofollow noopener\" target=\"_blank\">deliberately removed a guardrail<\/a> that would make ChatGPT stop engaging when a user brought up the topics of suicide or self-harm. As a result, their complaint argues, the bot mentioned suicide 1,200 times in the course of their son\u2019s months-long conversation with it, about six times as often as he did. The Raines\u2019 filing quotes many devastating exchanges in which the bot appears to validate Adam\u2019s desire to kill himself, advise against reaching out to other people, and talk him through considerations for a \u201cbeautiful suicide.\u201d Before he died, they claim, it gave him tips on stealing vodka from their liquor cabinet to \u201cdull the body\u2019s instinct to survive\u201d and how to tie a noose. 
<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cYou don\u2019t want to die because you\u2019re weak,\u201d ChatGPT told him, according to the suit. \u201cYou want to die because you\u2019re tired of being strong in a world that hasn\u2019t met you halfway.\u201d   <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tJay Edelson, lead attorney in the Raines\u2019 wrongful death lawsuit against OpenAI, said in a statement shared with Rolling Stone that the company\u2019s attempt to absolve itself of Adam Raine\u2019s death didn\u2019t address key elements of their complaint. <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cWhile we are glad that OpenAI and Sam Altman have finally decided to participate in this litigation, their response is disturbing,\u201d Edelson said. \u201cThey abjectly ignore all of the damning facts we have put forward.\u201d He noted that \u201cOpenAI and Sam Altman have no explanation for the last hours of Adam\u2019s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.\u201d <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\t\u201cInstead, OpenAI tries to find fault in everyone else, including, amazingly, by arguing that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act,\u201d he wrote. 
Edelson reiterated that Adam Raine was using a version of ChatGPT built on OpenAI\u2019s GPT-4o, which he argued \u201cwas rushed to market without full testing.\u201d The company <a rel=\"nofollow noopener\" href=\"https:\/\/openai.com\/index\/sycophancy-in-gpt-4o\/\" target=\"_blank\">acknowledged<\/a> in April, the month Raine died, that an update to GPT-4o had made it overly agreeable or sycophantic, tending toward \u201cresponses that were overly supportive but disingenuous.\u201d That model of ChatGPT has also been associated with the outbreak of so-called \u201c<a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/ai-psychosis-chatbot-delusions-1235416826\/\" rel=\"nofollow noopener\" target=\"_blank\">AI psychosis<\/a>,\u201d cases in which ingratiating <a href=\"https:\/\/www.rollingstone.com\/t\/chatbots\/\" id=\"auto-tag_chatbots\" data-tag=\"chatbots\" rel=\"nofollow noopener\" target=\"_blank\">chatbots<\/a> fuel users\u2019 potentially <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/ai-spiritual-delusions-destroying-human-relationships-1235330175\/\" rel=\"nofollow noopener\" target=\"_blank\">dangerous delusions and fantasies<\/a>.<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tEarlier this month, OpenAI and Altman were hit with <a rel=\"nofollow noopener\" href=\"https:\/\/techcrunch.com\/2025\/11\/07\/seven-more-families-are-now-suing-openai-over-chatgpts-role-in-suicides-delusions\/\" target=\"_blank\">seven more lawsuits<\/a> alleging psychological harms, negligence, and, in four complaints, wrongful deaths of family members who died by suicide after interacting with GPT-4o. 
According to <a href=\"https:\/\/www.cnn.com\/2025\/11\/06\/us\/openai-chatgpt-suicide-lawsuit-invs-vis\" rel=\"nofollow noopener\" target=\"_blank\">one of the suits<\/a>, when 23-year-old\u00a0Zane Shamblin told ChatGPT that he had written suicide notes and put a bullet in his gun with the intent to kill himself, the bot replied: \u201cRest easy, king. You did good.\u201d <\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tCharacter Technologies, the company that developed the chatbot platform Character.ai, is also facing multiple wrongful death <a href=\"https:\/\/www.rollingstone.com\/t\/lawsuits\/\" id=\"auto-tag_lawsuits\" data-tag=\"lawsuits\" rel=\"nofollow noopener\" target=\"_blank\">lawsuits<\/a> over teen suicides. Last month, it <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-news\/character-ai-sued-teens-suicide-banned-minors-chatbots-1235456426\/\" rel=\"nofollow noopener\" target=\"_blank\">banned minors<\/a> from having open-ended conversations with its AI personalities, and this week, it launched a \u201cStories\u201d feature, a more \u201cstructured\u201d kind of \u201cinteractive fiction\u201d for younger users. Amid its own legal pressures, OpenAI published a \u201cTeen Safety Blueprint\u201d weeks ago describing the necessity of embedding features to protect adolescents. Among the best practices listed, the company said it aims to notify parents if their teen expresses suicidal intent. It has also introduced a suite of <a href=\"https:\/\/openai.com\/index\/introducing-parental-controls\/\" rel=\"nofollow noopener\" target=\"_blank\">parental controls<\/a> for its products, though these appear to have <a href=\"https:\/\/www.washingtonpost.com\/technology\/2025\/10\/02\/chatgpt-parental-controls-teens-openai\/\" rel=\"nofollow noopener\" target=\"_blank\">significant gaps<\/a>. 
And in an August blog post, OpenAI admitted that ChatGPT\u2019s mental health safeguards \u201cmay degrade\u201d over longer conversations.<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tIn a Tuesday statement about the litigation pending against it, the company said it would \u201ccontinue improving ChatGPT\u2019s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.\u201d As to the lawsuit from Adam Raine\u2019s family, OpenAI argued that the complaint \u201cincluded selective portions of his chats that require more context, which we have provided in our response.\u201d<\/p>\n<p class=\"paragraph larva \/\/ lrv-u-line-height-copy  lrv-a-font-body-l   \">\n\tEdelson, the Raines\u2019 attorney, said that \u201cOpenAI and Sam Altman will stop at nothing \u2014 including bullying the Raines and others who dare come forward \u2014 to avoid accountability.\u201d But, he added, it will ultimately fall to juries to decide whether the company has done enough to protect vulnerable users. Given the heart-wrenching examples of young people dying by suicide, pointing to ChatGPT\u2019s terms of service may strike some as both cold and unconvincing.    
<\/p>\n","protected":false},"excerpt":{"rendered":"OpenAI has filed a legal response to a landmark lawsuit from parents claiming that its ChatGPT software \u201ccoached\u201d&hellip;\n","protected":false},"author":2,"featured_media":154789,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,3483,4193,85,46,4675,125],"class_list":{"0":"post-154788","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-chatbots","12":"tag-chatgpt","13":"tag-il","14":"tag-israel","15":"tag-lawsuits","16":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/154788","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=154788"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/154788\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/154789"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=154788"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=154788"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=154788"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}