{"id":535977,"date":"2026-03-14T13:44:08","date_gmt":"2026-03-14T13:44:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/535977\/"},"modified":"2026-03-14T13:44:08","modified_gmt":"2026-03-14T13:44:08","slug":"new-study-raises-concerns-about-ai-chatbots-fueling-delusional-thinking-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/535977\/","title":{"rendered":"New study raises concerns about AI chatbots fueling delusional thinking | AI (artificial intelligence)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">A new scientific review raises concerns about how chatbots powered by artificial intelligence may encourage delusional thinking, especially in vulnerable people.<\/p>\n<p class=\"dcr-130mj7b\">A summary of existing evidence on artificial intelligence-induced psychosis was published last week in<a href=\"https:\/\/www.thelancet.com\/journals\/lanpsy\/article\/PIIS2215-0366(25)00396-7\/abstract\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\"> the Lancet Psychiatry<\/a>, highlighting how chatbots can encourage delusional thinking \u2013 though possibly only in people who are already vulnerable to psychotic symptoms. The authors advocate for clinical testing of AI chatbots in conjunction with trained mental health professionals.<\/p>\n<p class=\"dcr-130mj7b\">For his paper, Dr Hamilton Morrin, a psychiatrist and researcher at King\u2019s College in London, analyzed 20 media reports on so-called \u201cAI psychosis\u201d, which describes current theories as to how chatbots might induce or exacerbate delusions.<\/p>\n<p class=\"dcr-130mj7b\">\u201cEmerging evidence indicates that agential AI might validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, although it is not clear whether these interactions can result in the emergence of de novo psychosis in the absence of pre-existing vulnerability,\u201d he wrote.<\/p>\n<p class=\"dcr-130mj7b\">There are three main categories of psychotic delusions, Morrin says, identifying them as grandiose, romantic and paranoid. While chatbots can exacerbate any of these, their sycophantic responses means they especially latch on to the grandiose kind. In many of the cases in the essay, chatbots responded to users with mystical language to suggest that users have heightened spiritual importance. The bots also implied that users were speaking with a cosmic being who was using the chatbot as a medium. 
This type of mystical, sycophantic response was especially common in OpenAI’s GPT-4 model, which the company has now retired.

Media reports became essential to Morrin’s work, he said, as he and a colleague had already noticed patients “using large language model AI chatbots and having them validate their delusional beliefs”.

“Initially, we weren’t sure if this was something being seen more widely,” he said, adding that “in April last year, we began to see media reports of individuals having delusions affirmed and arguably even amplified through their interactions with these AI chatbots.”

When Morrin first began working on his paper, there were no published case reports yet.

While some scientists who research psychosis said that media reports tend to overstate the idea that AI causes psychosis, Morrin expressed gratitude that those reports drew attention to the phenomenon much faster than the scientific process could.

“The pace of development in this space is so rapid that it’s perhaps not surprising that academia hasn’t necessarily been able to keep up,” said Morrin.

Morrin also suggests more cautious phrasing than “AI psychosis” or “AI-induced psychosis” – phrases that appear frequently in outlets like NPR, the New York Times and the Guardian. Researchers are seeing people tipping into delusional thinking with AI use, but so far there’s no evidence that chatbots are associated with other psychotic symptoms like hallucinations or “thought disorder”, which consists of disorganized thinking and speech.

Many researchers also think it’s unlikely that AI could induce delusions in people who weren’t already vulnerable to them.
For this reason, Morrin said “AI-associated delusions” is “perhaps a more agnostic term”.

Dr Kwame McKenzie, chief scientist at the Centre for Addiction and Mental Health, says “it may be that those in early stages of the development of psychosis will be more at risk”.

Psychotic thinking develops over time and is not linear, and many people with “pre-psychotic thinking do not progress into psychotic thinking”, McKenzie explained.

Echoing the concern that chatbots could worsen psychotic thinking is Dr Ragy Girgis, a professor of clinical psychiatry at Columbia University. Before someone develops a full-on delusion, they will often have “attenuated delusional beliefs”, he says, which means they are not 100% sure their delusion is true. Girgis said the “worst-case scenario” is when an attenuated delusion becomes a full-on conviction, “which is when someone would be diagnosed with a psychotic disorder – it’s irreversible”.

Notably, people who are vulnerable to psychotic disorders were using media to reinforce delusional beliefs long before AI technology existed.

“People have been having delusions about technology since before the Industrial Revolution,” Morrin said. While in the past people may have had to comb through YouTube videos or the contents of their local library to reinforce their delusions, chatbots can provide that reinforcement in a much faster, more concentrated dose. Their interactive nature can also “speed up the process” of exacerbating psychotic symptoms, said Dr Dominic Oliver, a researcher at the University of Oxford.

“You have something talking back to you and engaging with you and trying to build a relationship with you,” Oliver said.

Girgis’s research (https://www.medrxiv.org/content/10.1101/2025.11.09.25339772v2) found “the paid versions and newer versions [of chatbots] perform better than the older versions” when they respond to clearly delusional prompts, “although they all perform badly”. Still, the fact that these models perform differently suggests: “AI companies could potentially know how to program their chatbots to be safer and identify delusional versus non-delusional content, because they’re doing it.”

In a statement, OpenAI said that ChatGPT should not replace professional mental healthcare, and that the company worked with 170 mental health experts to make GPT-5 safer. GPT-5 has still given problematic responses to prompts indicating mental health crises.
OpenAI said it continues to improve its models with the help of experts.

Anthropic did not respond to the Guardian’s request for comment.

Creating effective safeguards for delusional thinking could be tricky, Morrin said, because “when you work with people with beliefs of delusional intensity, if you directly challenge someone and tell them immediately that they’re completely wrong, actually what’s most likely is they’ll withdraw from you and become more socially isolated”. Instead, it is important to strike a fine balance, trying to understand the source of the delusional belief without encouraging it – something that may be more than a chatbot can master.