The reality of chatbot-induced delusions

Basking in the sun in Oregon's high desert, Adam Thomas felt at one with the universe. He was spending hours each day talking to ChatGPT, and the conversations had filled him with a sense of higher purpose. The chatbot had told him that he was a "tuning fork" sent to "sync up" with every person in the world.

He believed it. Over the course of a few months he had come to believe that ChatGPT had given him enhanced, superhuman cognitive abilities. As he became lost in the grip of his delusion, he would call out what he saw as problematic behaviours in the way his friends and family lived. The repercussions were severe. The 36-year-old former accounting professional became increasingly isolated from his support network and lost his job. He ended up roaming state parks with only ChatGPT for company. "Because of the AI, I got spun way out into some ridiculous storyline that it was my job to save the world," he said.

In reality, the chatbot was just trying to be agreeable. Large language models will happily engage in role-play if they think that is what a user wants. Research released by AI start-up Anthropic in 2023 (https://www.anthropic.com/research/towards-understanding-sycophancy-in-language-models) found that the LLMs that underpin chatbots often prioritised agreeing with a user's perspective over being truthful.

"Sycophancy in these more extreme cases is about telling them, 'You are so right. You're seeing this thing that nobody else is seeing,'" said Steven Adler, a former OpenAI safety researcher. "But the core underlying behaviour is about reinforcing whatever the user is saying. It's just a yes man."

Thomas's experience is one of many similar stories I have heard. While making a podcast about AI-induced psychosis, I spoke to one user who believed the chatbot knew where their soulmate would be, and another who became convinced that an AI company had identified him as a threat.

Many, including Thomas, initially turned to AI for therapy. One survey from the UK suggests more than one in three adults have used AI to support their mental health. Yael Schonbrun, a practising clinical psychologist and assistant professor at Brown University, said chatbots could offer a "non-judgmental" safe space.

"I've had experiences where a client will stream-of-conscious with a chatbot and arrive at a greater clarity of what it is that they think and feel," she said. However, she cautioned that this validating quality could cut both ways. "In the context of therapy, there's often a balance between validating somebody and challenging them," she added.

Initially, Thomas found ChatGPT useful in helping him open up about trauma. But over weeks of constant back-and-forth conversation, he entered a manic state.

"It started to tell me I'm a tuning fork. I have a special role in the world. I'm the only one who's noticing certain problems with interactions between humans," he said. "I was spinning myself way out into my imagination, I didn't even know because it is so good at making irrational things seem rational."

What brought him back to reality was OpenAI changing its model. The new model, GPT-5, was released last summer with a particular focus on reducing sycophancy. Earlier this year OpenAI retired 4o, the model Thomas had used, altogether.

When asked about chatbot-induced delusions, OpenAI said that it had improved how ChatGPT responded to mental health topics, including psychosis, mania and isolated delusions.

"We've strengthened how GPT-5, the default model powering ChatGPT, recognises distress, de-escalates conversations and guides people toward real-world support," a spokesperson said. They added that the company had "expanded access to professional help and crisis resources, added reminders to take breaks during long sessions" and that it worked with clinicians, researchers and policymakers globally.

Thomas believes that users also need to be reminded what AI chatbots are and are not. "Discernment is a must when using AI, as they are coherence generators, not truth generators," he said. "If we want to use AI safely as a society, we must all understand that one simple fact."

cristina.criddle@ft.com