Sam Altman-led OpenAI estimates that around 0.15% of ChatGPT’s weekly users discuss suicidal thoughts or plans, even as it cautions that the chatbot is not a therapist, according to a blog post published on Monday. While a small fraction, the figure is significant given the platform’s massive global reach.

OpenAI says the new GPT-5 model, which powers ChatGPT by default, reduces unsafe or non-compliant responses in mental-health-related chats by as much as 80%, and performs substantially better when users show signs of psychosis, mania, or emotional over-reliance on the chatbot.

ChatGPT is not a therapist

The update comes after months of work with psychiatrists and psychologists in OpenAI’s Global Physician Network, a group of nearly 300 clinicians across 60 countries. More than 170 of them directly contributed to the new system, writing and scoring responses, defining safe behaviour, and reviewing how the model handles sensitive scenarios.

Notably, the company said that the goal is not to turn ChatGPT into a therapist, but to ensure it recognises signs of distress and gently redirects users to professional or real-world support. The model now connects people more reliably to crisis helplines and occasionally nudges users to take breaks during longer or emotionally charged sessions.

How GPT-5 responds to mental-health-related queries

OpenAI’s internal testing shows that in production traffic, the GPT-5 model produced 65–80% fewer unsafe responses than previous versions when users displayed signs of mental-health distress.

The company noted that in structured evaluations graded by independent clinicians, GPT-5 cut undesirable replies by 39–52% compared with GPT-4o. Automated testing scored it 91–92% compliant with desired behaviour, up from 77% or lower for older models.

The system also handled lengthy or complex conversations more reliably, maintaining over 95% consistency even in multi-turn dialogues, where earlier models often faltered.

How ChatGPT tackles emotional attachment

A newer challenge OpenAI is taking on is emotional reliance, in which users form unhealthy attachments to the chatbot itself. Using a new taxonomy to identify and measure that behaviour, OpenAI says GPT-5 now produces 80% fewer problematic replies in these scenarios, often steering users toward human connection instead of validating emotional dependence.

Still, OpenAI admits these mental-health conversations are rare and hard to quantify precisely. At such low prevalence (fractions of a per cent), even small variations can distort results. And experts do not always agree on what “safe” looks like: clinicians reviewing the model’s responses reached the same judgment only 71–77% of the time.