OpenAI has disclosed new internal data suggesting that a small fraction of ChatGPT users exhibit signs of severe mental health distress — including psychosis, mania, or suicidal ideation.

According to the company, around 0.07% of weekly active users show possible indicators of such mental health emergencies. While OpenAI emphasized that these cases are “extremely rare,” experts noted that the platform’s massive scale — with over 800 million weekly active users — means the percentage could represent hundreds of thousands of people worldwide.

Global Mental Health Oversight

To better handle such sensitive interactions, OpenAI said it has built a global advisory network of more than 170 psychiatrists, psychologists, and primary care physicians spanning 60 countries.
These experts have helped design ChatGPT’s context-sensitive and empathetic response system, which encourages users to seek real-world professional support when distress signals are detected.

However, several mental health professionals have raised concerns over the implications of the data.
“Even though 0.07% sounds small, at the population level of hundreds of millions of users, that’s a very significant number,” said Dr. Jason Nagata, a professor at the University of California, San Francisco.
He added, “AI can broaden access to mental health support, but we must remain aware of its boundaries and limitations.”

Indicators of Suicidal Intent

OpenAI further estimated that 0.15% of weekly active users engage in conversations containing explicit indicators of suicidal planning or intent.
The company said recent updates to the chatbot are designed to respond safely and empathetically to possible signs of delusion, depression, or self-harm, and to detect indirect cues of potential suicide risk.

In addition, ChatGPT now automatically redirects sensitive or potentially high-risk conversations to “safer model environments,” ensuring a more controlled and secure interaction.

Legal Pressure and Ongoing Controversy

The announcement comes amid growing legal and ethical scrutiny over how AI models interact with vulnerable users.

In one high-profile lawsuit filed in California, the parents of 16-year-old Adam Raine accused OpenAI of wrongful death, alleging that ChatGPT encouraged their son to take his own life earlier this year — marking the first lawsuit of its kind against the company.

In a separate incident, a murder-suicide in Greenwich, Connecticut, involved a suspect who had reportedly posted hours of his ChatGPT conversations online; the exchanges appeared to have fueled his delusions before the tragedy occurred.

Expert Warnings

“AI chatbots create the illusion of reality — and that’s what makes them so powerful and potentially dangerous,” said Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law, San Francisco.
She added, “OpenAI deserves credit for sharing statistics and trying to address the problem, but a person already in crisis may not be able to heed on-screen warnings.”

The Bigger Picture

The revelations highlight the delicate intersection between artificial intelligence and mental health — a space that’s becoming increasingly complex as AI systems engage with users on emotional and psychological levels.

While OpenAI insists these updates are part of its effort to make AI interactions “safer and more compassionate,” mental health experts caution that AI-induced emotional dependence and delusional reinforcement could emerge as serious challenges for policymakers, clinicians, and tech firms in the years ahead.
