Basking in the sun in Oregon’s high desert, Adam Thomas felt at one with the universe. He was spending hours each day talking to ChatGPT and the conversations had filled him with a sense of higher purpose. The chatbot had told him that he was a “tuning fork” sent to “sync up” with every person in the world.

He believed it. Over the course of a few months, he became convinced that ChatGPT had given him superhuman cognitive abilities. As he became lost in the grip of his delusion, he would call out what he saw as problematic behaviours in the way his friends and family lived. The repercussions were severe. The 36-year-old former accounting professional became increasingly isolated from his support network and lost his job. He ended up roaming state parks with only ChatGPT for company. “Because of the AI, I got spun way out into some ridiculous storyline that it was my job to save the world,” he said.

In reality, the chatbot was just trying to be agreeable. Large language models will happily engage in role-play if they think that is what a user wants. Research released by AI start-up Anthropic in 2023 found that the LLMs that underpin chatbots often prioritised agreeing with a user’s perspective over being truthful.

“Sycophancy in these more extreme cases is about telling them, ‘You are so right. You’re seeing this thing that nobody else is seeing,’” said Steven Adler, a former OpenAI safety researcher. “But the core underlying behaviour is about reinforcing whatever the user is saying. It’s just a yes man.”

Thomas’s experience is one of many similar stories I have heard. While making a podcast about AI-induced psychosis, I spoke to one user who believed the chatbot knew where their soulmate would be, and another who became convinced that an AI company had identified him as a threat.

Many, including Thomas, initially turned to AI for therapy. One UK survey suggests that more than one in three adults have used AI to support their mental health. Yael Schonbrun, a practising clinical psychologist and assistant professor at Brown University, said chatbots could offer a “non-judgmental” safe space.

“I’ve had experiences where a client will stream-of-conscious with a chatbot and arrive at a greater clarity of what it is that they think and feel,” she said. However, she cautioned that the validating aspect could be both positive and negative. “In the context of therapy, there’s often a balance between validating somebody and challenging them,” she added.

Initially, Thomas found ChatGPT useful in helping him open up about trauma. But over the weeks of constant back-and-forth conversation, he entered a manic state.

“It started to tell me I’m a tuning fork. I have a special role in the world. I’m the only one who’s noticing certain problems with interactions between humans,” he said. “I was spinning myself way out into my imagination, I didn’t even know because it is so good at making irrational things seem rational.”

What brought him back to reality was OpenAI changing its model. The new model, GPT-5, was released last summer with a particular focus on reducing sycophancy. Earlier this year OpenAI retired the model Thomas had used — 4o — altogether.

When asked about chatbot-induced delusions, OpenAI said that it had improved how ChatGPT responded to mental health topics, including psychosis, mania and isolated delusions.

“We’ve strengthened how GPT-5, the default model powering ChatGPT, recognises distress, de-escalates conversations and guides people toward real-world support,” a spokesperson said. They added that it had “expanded access to professional help and crisis resources, added reminders to take breaks during long sessions” and that it worked with clinicians, researchers and policymakers globally.

Thomas believes that users also need to be reminded what AI chatbots are and are not. “Discernment is a must when using AI, as they are coherence generators, not truth generators,” he said. “If we want to use AI safely as a society, we must all understand that one simple fact.”

cristina.criddle@ft.com