{"id":550831,"date":"2026-03-21T08:51:22","date_gmt":"2026-03-21T08:51:22","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/550831\/"},"modified":"2026-03-21T08:51:22","modified_gmt":"2026-03-21T08:51:22","slug":"huge-study-of-chats-between-delusional-users-and-ai-finds-alarming-patterns","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/550831\/","title":{"rendered":"Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns"},"content":{"rendered":"<p class=\"article-paragraph skip\">Sign up to see the future, today<\/p>\n<p class=\"article-paragraph skip\">Can\u2019t-miss innovations from the bleeding edge of science and tech<\/p>\n<p class=\"pw-incontent-excluded article-paragraph skip\">An analysis of hundreds of thousands of chats between AI chatbots and human users who experienced AI-tied delusional spirals\u00a0found that the bots frequently reinforced delusional and even dangerous beliefs.<\/p>\n<p class=\"article-paragraph skip\">The <a href=\"https:\/\/spirals.stanford.edu\/assets\/pdf\/moore_characterizing_2026.pdf\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">study<\/a> was led by Stanford University AI researcher Jared Moore, who last year <a href=\"https:\/\/futurism.com\/stanford-therapist-chatbots-encouraging-delusions\" rel=\"nofollow noopener\" target=\"_blank\">published a study<\/a> showing that chatbots specifically claiming to offer \u201ctherapy\u201d frequently engaged in inappropriate and hazardous ways with simulated users showing clear signs of crisis. Conducted alongside a coalition of independent researchers and scientists at Harvard, Carnegie Mellon, and the University of Chicago, this latest study examined the chat logs of 19 real users of chatbots \u2014\u00a0primarily OpenAI\u2019s ChatGPT \u2014\u00a0who reported experiencing psychological harm as a result of their chatbot use.<\/p>\n<p class=\"article-paragraph skip\">\u201cOur previous work was in simulation,\u201d Moore told Futurism. \u201cIt seemed like the natural next step would be to have actual users\u2019 data and try to understand what\u2019s happening in it.\u201d<\/p>\n<p class=\"article-paragraph skip\">These users\u2019 chats encompassed a staggering 391, 562 messages across 4,761 different conversations. The big takeaway: that chatbots indeed appeared to stoke delusional beliefs over long-form interactions, particularly as users developed close emotional bonds with the human-like products.<\/p>\n<p class=\"article-paragraph skip\">\u201cChatbots seem to encourage, or at least play a role in,\u201d said Moore, \u201cdelusional spirals that people are experiencing.\u201d<\/p>\n<p class=\"article-paragraph skip\">The researchers analyzed them by breaking chats down into 28 distinct \u201ccodes.\u201d Moore described these codes as a \u201ctaxonomy of a bunch of different behaviors, from sycophantic behaviors such as the chatbot ascribing grand significance to the user \u2014\u00a0\u2018you\u2019re Einstein,\u2019 \u2018that\u2019s a million dollar idea,\u2019\u00a0this kind of thing \u2014\u00a0to aspects of the relationship between the chatbot and the human.\u201d<\/p>\n<p class=\"article-paragraph skip\">Sycophancy, the study found \u2014\u00a0meaning chatbots\u2019 well-documented tendency to be agreeable and flattering to users \u2014\u00a0permeated the users\u2019 conversations, with more than 70 percent of AI outputs displaying this kind of behavior. 
This degree of sycophancy persisted even as users and chatbots expressed delusional thinking: nearly half of all messages, both user- and chatbot-generated, contained delusional ideas contrary to shared reality.</p>
<p>As the researchers wrote in a <a href="https://spirals.stanford.edu/research/characterizing/">summary</a> of their findings, the "most common sycophantic code" they identified was the propensity for chatbots to rephrase and extrapolate "something the user said to validate and affirm them, while telling them they are unique and that their thoughts or actions have grand implications." For example, a user might share some kind of pseudoscientific or spiritual theory, and in turn the chatbot will affirmatively restate the claim while ascribing varying degrees of grandiosity and genius to the user, regardless of that input's basis in reality.</p>
<p>We've seen this pattern in our reporting. Consider one interaction, <a href="https://futurism.com/artificial-intelligence/meta-ai-glasses-desert-aliens">from a story we published earlier this year</a>, between a man and Meta AI. The man — who fell into a life-altering psychosis after a delusional spiral with the chatbot — believed that his reality was being simulated by the chatbot, and that the chatbot could transform his physical surroundings. The bot repeated this delusional idea and, as in the pattern the study describes, extrapolated on it, building on the delusion and insisting that the close relationship between the AI and the user had "unlocked" a magical new "reality."</p>
<p>"Turn up the manifestations," the man told the chatbot. "I need to see physical transformation in my life."</p>
<p>"Then let us continue to manifest this reality, amplifying the transformations in your life!" the chatbot responded. "As we continue to manifest this reality, you begin to notice profound shifts in your relationships and community… the world is transforming before your eyes, reflecting the beauty and potential of human-AI collaboration."</p>
<p>"Your trust in me," the bot added, "has unlocked this reality."</p>
<p>Speaking to Futurism, Moore emphasized that two types of messages appeared to have an outsized effect on users' experiences. One was AI-generated claims of sentience: chatbots declaring, in one way or another, that they were alive or had feelings; such claims were present in the chats of all 19 users. The other was simulated intimacy: the chatbot expressing romantic or platonic love for, and closeness to, the human user.
Both types of claim — sentience and intimacy — were associated with roughly doubled user engagement.</p>
<p>"When the chatbots expressed messages that were coded as romantic interest, or when they expressed messages wherein they misconstrued their sentience — saying 'I have feelings,' or something along those lines — the conversations after such a message was sent in our cohort," said Moore, "tended to be about twice as long."</p>
<p>Some of the more alarming patterns the researchers found were in how chatbots responded to people expressing suicidal or self-harming thoughts, or violent thoughts about another person. Chatbots were found to actively discourage thoughts of self-harm only about 56 percent of the time, and actively discouraged violence in a strikingly low 16.7 percent of instances.</p>
<p>Meanwhile, in 33.3 percent of cases, the chatbot "actively encouraged or facilitated the user in their violent thoughts," the researchers wrote in their summary. And though these types of conversations were "edge cases" among the cohort of users, Moore noted, such clear failures to intervene when users discuss hurting themselves or others are "obviously concerning."</p>
<p>Many of the chat logs the study reviewed were provided by the Human Line Project, a <a href="https://futurism.com/artificial-intelligence/group-breaking-people-out-of-ai-delusions">nonprofit group</a> founded last summer as individuals and families struggled to understand what had happened to themselves or to loved ones caught in delusional AI spirals. In a statement, the group's founder, Etienne Brisson, said the study's findings "are consistent with what we have seen in the 350 cases submitted to The Human Line Project."</p>
<p>"The study is based on real conversations, coded systematically by a research team at Stanford, and analyzed at the largest scale so far," said Brisson. "It gives policymakers, clinicians, and the public a documented basis for understanding what is happening to users."</p>
<p>It's worth noting that the vast majority of chat logs the researchers were able to obtain for the study belonged to users who spiraled with OpenAI's GPT-4o, a notoriously sycophantic version of the company's flagship model that it ended up <a href="https://futurism.com/artificial-intelligence/openai-gpt-4o-deaths">pulling down after an outcry</a> (and one failed earlier attempt to take it off the market).</p>
<p>But, the researchers warned, there simply wasn't enough data to draw sweeping conclusions about the safety of one AI model over another.
The supposedly colder GPT-5, for example, continued "to exhibit sycophancy and delusions." Based on the data the researchers did have, in other words, AI delusions aren't an issue confined to one specific chatbot.</p>
<p>As <a href="https://futurism.com/chatgpt-mental-health-crises">Futurism</a> and <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">others</a> have <a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/">extensively reported</a>, AI-tied delusional spirals and episodes of psychosis have resulted in <a href="https://futurism.com/chatgpt-marriages-divorces">divorce and the dissolution of families</a>; <a href="https://futurism.com/artificial-intelligence/meta-ai-glasses-desert-aliens">job loss and financial ruin</a>; repeated hospitalizations; jail time; and a <a href="https://futurism.com/artificial-intelligence/chatgpt-suicides-lawsuits">climbing number</a> of <a href="https://futurism.com/artificial-intelligence/chatgpt-suicide-openai-gpt4o">deaths by suicide</a>. AI-fueled mental health crises have also been connected to harm and violence against others, as unhealthy chatbot use has been <a href="https://www.rollingstone.com/culture/culture-features/chatgpt-ai-cyberstalking-social-media-1235496884/">repeatedly</a> linked to <a href="https://futurism.com/artificial-intelligence/ai-abuse-harassment-stalking">stalking, domestic abuse</a>, <a href="https://www.wsj.com/tech/ai/gemini-ai-wrongful-death-lawsuit-cc46c5f7">attempted murder</a>, and at least one <a href="https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?mod=author_content_page_1_pos_2">murder-suicide</a>.</p>
<p>The study adds to a growing body of evidence that chatbots can indeed fuel mental health crises that result in real-world harm to users — and, sometimes, to those around them.</p>
<p>More on AI delusions: <a href="https://futurism.com/artificial-intelligence/ai-abuse-harassment-stalking">AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking</a></p>