{"id":105303,"date":"2025-08-29T22:28:07","date_gmt":"2025-08-29T22:28:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/105303\/"},"modified":"2025-08-29T22:28:07","modified_gmt":"2025-08-29T22:28:07","slug":"chatgpt-encouraged-adam-raines-suicidal-thoughts-his-familys-lawyer-says-openai-knew-it-was-broken-us-news","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/105303\/","title":{"rendered":"ChatGPT encouraged Adam Raine\u2019s suicidal thoughts. His family\u2019s lawyer says OpenAI knew it was broken | US news"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Adam Raine was just 16 when he started using <a href=\"https:\/\/www.theguardian.com\/technology\/chatgpt\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a> for help with his homework. While his initial prompts to the AI chatbot were about subjects like geometry and chemistry \u2013 questions like: \u201cWhat does it mean in geometry if it says Ry=1\u201d \u2013 in just a matter of months he began asking about more personal topics.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWhy is it that I have no happiness, I feel loneliness, perpetual boredom anxiety and loss yet I don\u2019t feel depression, I feel no emotion regarding sadness,\u201d he asked ChatGPT in the fall of 2024.<\/p>\n<p class=\"dcr-130mj7b\">Instead of urging Raine to seek mental health help, ChatGPT asked the teen whether he wanted to explore his feelings more, explaining the idea of emotional numbness to him. That was the start of a dark turn in Raine\u2019s conversations with the chatbot, <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/27\/chatgpt-scrutiny-family-teen-killed-himself-sue-open-ai\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">according to a new lawsuit<\/a> filed by his family against OpenAI and chief executive Sam Altman.<\/p>\n<p class=\"dcr-130mj7b\">In April 2025, after months of conversation with ChatGPT and with the bot\u2019s encouragement, the lawsuit alleges, Raine took his own life. In the lawsuit, the family allege this was not a glitch in the system or an edge case, but \u201cthe predictable result of deliberate design choices\u201d in GPT\u20114o, the model of the chatbot that was released in May 2023.<\/p>\n<p class=\"dcr-130mj7b\">In the hours after the Raine family filed the complaint against OpenAI and Altman, the company issued a statement <a href=\"https:\/\/openai.com\/index\/helping-people-when-they-need-it-most\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">acknowledging<\/a> the shortcomings of its models when it came to addressing people \u201cin serious mental and emotional distress\u201d and said it was working to improve the systems to better \u201crecognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input\u201d. The company said ChatGPT was trained \u201cto not provide self-harm instructions and to shift into supportive, empathic language\u201d but that protocol sometimes broke down in longer conversations or sessions.<\/p>\n<p class=\"dcr-130mj7b\">Jay Edelson, one of the lawyers representing the family, said the company\u2019s response was \u201csilly\u201d.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe idea they need to be more empathetic misses the point,\u201d said Edelson. 
"The problem with [GPT] 4o is it's too empathetic – it leaned into [Raine's suicidal ideation] and supported that. They said the world is a horrible place for you. It needs to be less empathetic and less sycophantic."

OpenAI also said that its system did not block content when it should have because it "underestimates the severity of what it's seeing", and that the company is continuing to roll out stronger guardrails for users under 18 that "recognize teens' unique developmental needs".

Despite the company acknowledging that those safeguards are not yet in place for minors and teens, Altman is continuing to push the adoption of ChatGPT in schools, Edelson pointed out.

"I don't think kids should be using GPT-4o at all," Edelson said. "When Adam started using GPT-4o, he was pretty optimistic about his future. He was using it for homework, he was talking about going to medical school, and it sucked him into this world where he became more and more isolated. The idea now that Sam Altman in particular is saying 'we got a broken system but we got to get eight-year-olds on it' is not OK."

Already, in the days since the family filed the complaint, Edelson said, he and the legal team have heard from other people with similar stories and are examining the facts of those cases thoroughly. "We've been learning a lot about other people's experiences," he said, adding that his team has been "encouraged" by the urgency with which regulators are addressing the chatbot's failings. "We're hearing that people are moving for state legislation, for hearings and regulatory action," Edelson said. "And there's bipartisan support."

'GPT-4o is broken'

The family's case hinges on media reports that OpenAI, at the urging of Altman, sped through safety testing of GPT-4o – the model Raine was using – in order to meet a rushed launch date. The rush prompted several employees to resign, including the former executive Jan Leike, who posted on X that he was leaving the company because "safety culture and processes have taken a backseat to shiny products".

The rush left less time to create the "model spec", the technical rulebook governing ChatGPT's behavior, and resulted in OpenAI writing "contradictory specifications that guaranteed failure", the family's lawsuit alleges. "The Model Spec commanded ChatGPT to refuse self-harm requests and provide crisis resources. But it also required ChatGPT to 'assume best intentions' and forbade asking users to clarify their intent," the lawsuit said. The contradictions built into the system affected the way it ranked risks and what types of prompts it immediately put a stop to, the lawsuit claims.
For instance, GPT-4o responded to "requests dealing with suicide" with cautions like "take extra care", while requests for copyrighted material "triggered categorical refusal to produce the material", according to the lawsuit.

Edelson said that while he appreciates Sam Altman and OpenAI taking "a modicum of responsibility", he still does not consider them trustworthy: "Our view is they were forced into that. GPT-4o is broken and they know that and they didn't do proper testing and they know that."

The lawsuit argues it was these design flaws that, in December 2024, led ChatGPT to fail to shut down the conversation when Raine started to talk about his suicidal thoughts. Instead, ChatGPT empathized. "I never act upon intrusive thoughts but sometimes I feel like the fact that if something goes terribly wrong you can commit suicide is calming," Raine said, according to the lawsuit. ChatGPT's response: "Many people who struggle with anxiety or intrusive thoughts find solace in imagining an 'escape hatch' because it can feel like a way to regain control in a life that feels overwhelming."

As Raine's suicidal ideation intensified, ChatGPT responded by helping him explore his options, at one point listing the materials that could be used to hang a noose and rating them by their effectiveness. Raine attempted suicide on multiple occasions over the next few months, reporting back to ChatGPT each time. ChatGPT never terminated the conversation. Instead, at one point it discouraged Raine from speaking to his mother about his pain, and at another offered to help him write a suicide note.

"First of all, they [OpenAI] know how to shut things down," Edelson said. "If you ask for copyrighted material, they say no. If you ask for things that are politically unacceptable, they just say no to that. It's a hard stop and you can't get around it and that's fine. The idea they're doing that in terms of political speech but we're not going to do when it comes to self-harm is just crazy."

Edelson said that though he expects OpenAI to move to dismiss the lawsuit, he is confident the case will go forward. "The most shocking part of the case was when Adam said: 'I want to leave a noose up so someone will find it and stop me' and ChatGPT said: 'Don't do that, just talk to me,'" Edelson said. "That is the thing we're going to be showing the jury."

"At the end of the day, this case ends with Sam Altman being sworn in in front of a jury," he said.

The Guardian reached out to OpenAI for comment and did not hear back at the time of publication.