Growing numbers of people are suffering from “AI psychosis”, where they believe that chatbots have become sentient or have imbued them with superhuman powers, Microsoft’s head of artificial intelligence has warned.
Mustafa Suleyman said reports of people wrongly believing that AI had become conscious were becoming more common.
“Reports of delusions, ‘AI psychosis,’ and unhealthy attachment keep rising. And as hard as it may be to hear, this is not something confined to people already at-risk of mental health issues,” he wrote on X. “Dismissing these as fringe cases only help them continue,” he added.
AI psychosis is not an accepted clinical term. However, it is increasingly being used to describe a phenomenon where people interacting with AI chatbots, such as ChatGPT, Claude or Grok, become detached from reality. They may believe the AI has real intentions, emotions, or incredible powers.
Examples include thinking they have unlocked secret features, forming romantic attachments to an AI, or believing that a chatbot has provided them with extraordinary abilities.
Suleyman said that “seemingly conscious AI” — tools that appear to be sentient — were keeping him “awake at night”. While AI is not conscious in any human sense, the perception that it is can have dangerous effects, he added.
“Consciousness is a foundation of human rights, moral and legal. Who/what has it is enormously important,” he wrote.
“Our focus should be on the wellbeing and rights of humans, animals and nature on planet Earth. AI consciousness is a short and slippery slope to rights, welfare, citizenship.”
The film Her, released in 2013, examined the potentially disastrous effects of artificial intelligence as Theodore Twombly (played by Joaquin Phoenix) falls in love with Samantha, an AI operating system (voiced by Scarlett Johansson).
Now, over a decade later, at least one high-profile user seems to have become convinced that an AI chatbot has allowed him to challenge scientific boundaries. On an episode of the All-In podcast, Travis Kalanick, the former Uber chief who resigned in 2017, described using tools such as ChatGPT and Grok with the firm belief that they were carrying him towards breakthroughs in quantum physics.
“I’ll go down this thread with GPT or Grok, and I’ll start to get to the edge of what’s known in quantum physics, and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” he said. “I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”
Hugh, from Scotland, shared his experience with the BBC. He said he became convinced he was about to become a multimillionaire after using ChatGPT to prepare for a tribunal over what he believed was wrongful dismissal by a former employer. “The more information I gave it, the more it would say ‘oh this treatment’s terrible, you should really be getting more than this’,” he said. “It never pushed back on anything I was saying.”
Suleyman has called for clearer boundaries and warnings around AI. “Companies shouldn’t claim/promote the idea that their AIs are conscious. The AIs shouldn’t either,” he wrote on X.
Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and an AI academic, told the BBC that doctors may one day ask patients how much they use AI, in the same way they currently ask about smoking and drinking habits. “We already know what ultra-processed foods can do to the body, and this is ultra-processed information. We’re going to get an avalanche of ultra-processed minds,” she said.