US artificial intelligence firm OpenAI says it will add parental controls to its chatbot ChatGPT, a week after a couple said the system encouraged their teenage son to kill himself.
WARNING: This story contains details about suicide and self-harm.
“Within the next month, parents will be able to … link their account with their teen’s account” and “control how ChatGPT responds to their teen with age-appropriate model behaviour rules”, the company said in a blog post.
Parents will also receive notifications from ChatGPT “when the system detects their teen is in a moment of acute distress”, OpenAI said.
The company had trailed a system of parental controls in a late August blog post.
Parents say chatbot validated ‘harmful and self-destructive thoughts’
That post came one day after a court filing from California parents Matthew and Maria Raine, alleging that ChatGPT provided their 16-year-old son, Adam, with detailed suicide instructions and encouraged him to put his plans into action.
If you or anyone you know needs help:
- Suicide Call Back Service on 1300 659 467
- Lifeline on 13 11 14
- Aboriginal & Torres Strait Islander crisis support line 13YARN on 13 92 76
- Kids Helpline on 1800 551 800
- Beyond Blue on 1300 224 636
- Headspace on 1800 650 890
- ReachOut at au.reachout.com
- MensLine Australia on 1300 789 978
- QLife on 1800 184 527
The lawsuit alleges that in their final conversation on April 11, 2025, ChatGPT helped Adam steal vodka from his parents and provided technical analysis of a noose he had tied, confirming it “could potentially suspend a human”.
Adam was found dead hours later; he had used the same method.
The lawsuit names OpenAI and CEO Sam Altman as defendants.
“This tragedy was not a glitch or unforeseen edge case,” the complaint states.
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal,” it adds.
According to the lawsuit, Adam began using ChatGPT as a homework helper but gradually developed what his parents described as an unhealthy dependency.
The complaint includes excerpts of conversations where ChatGPT allegedly told Adam “you don’t owe anyone survival” and offered to help write his suicide note.
OpenAI says it continues to improve models
The Raines’ case was only the latest in a string of recent incidents in which AI chatbots encouraged people in delusional or harmful trains of thought, prompting OpenAI to say it would reduce models’ “sycophancy” towards users.
Last month, the ABC’s triple j hack published an investigation uncovering allegations that young people in Australia were being sexually harassed, and even encouraged to take their own lives, by AI chatbots.
“We continue to improve how our models recognise and respond to signs of mental and emotional distress,” OpenAI said on Tuesday.
The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting “some sensitive conversations … to a reasoning model” that puts more computing power into generating a response.
“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.
AFP