{"id":475161,"date":"2026-03-14T11:52:24","date_gmt":"2026-03-14T11:52:24","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/475161\/"},"modified":"2026-03-14T11:52:24","modified_gmt":"2026-03-14T11:52:24","slug":"ai-hallucinations-work-both-ways-study-shows-using-chatbots-can-amplify-and-reinforce-our-own-delusions","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/475161\/","title":{"rendered":"AI hallucinations work both ways, study shows \u2014 using chatbots can amplify and reinforce our own delusions"},"content":{"rendered":"<p id=\"346a7e99-d290-4573-b2a6-384009748077\">There are numerous examples of <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\" data-url=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" data-before-rewrite-localise=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\" rel=\"nofollow noopener\" target=\"_blank\">artificial intelligence<\/a> (AI) systems hallucinating and the effects of these incidents. But a new study highlights the potential dangers of the reverse: humans hallucinating with AI because it tends to affirm our delusions.<\/p>\n<p>Generative AI systems, such as <a data-analytics-id=\"inline-link\" href=\"https:\/\/chatgpt.com\/\" target=\"_blank\" data-url=\"https:\/\/chatgpt.com\/\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">ChatGPT<\/a> and <a data-analytics-id=\"inline-link\" href=\"https:\/\/grok.com\/\" target=\"_blank\" data-url=\"https:\/\/grok.com\/\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">Grok<\/a>, generate content that responds to user prompts. They do this by learning patterns from the data they were trained on. 
Many of these AI tools can also adapt through user feedback and personalize their responses based on previous interactions with a user.<\/p>\n<p id=\"346a7e99-d290-4573-b2a6-384009748077-2\" class=\"paywall\" aria-hidden=\"true\">Generative AI tools don&#8217;t always assess whether their outputs are factually accurate. Instead, they produce streams of text based on the statistical probability of what is expected next.<\/p>\n<p id=\"aee0c19e-c37e-4071-804c-7b0bd7473601\">In the new analysis, published Feb. 11 in the journal <a data-analytics-id=\"inline-link\" href=\"https:\/\/go.redirectingat.com?id=92X1590019&amp;xcust=livescience_us_1322680545379190654&amp;xs=1&amp;url=https%3A%2F%2Flink.springer.com%2Farticle%2F10.1007%2Fs13347-026-01034-3&amp;sref=https%3A%2F%2Fwww.livescience.com%2Ftechnology%2Fartificial-intelligence%2Fgenerative-ai-can-amplify-and-reinforce-our-delusions-findings-show\" target=\"_blank\" data-url=\"https:\/\/link.springer.com\/article\/10.1007\/s13347-026-01034-3\" referrerpolicy=\"no-referrer-when-downgrade\" rel=\"sponsored noopener nofollow\" data-hl-processed=\"skimlinks\" data-google-interstitial=\"false\" data-placeholder-url=\"https:\/\/go.redirectingat.com?id=92X1590019&amp;xcust=hawk-custom-tracking&amp;xs=1&amp;url=https%3A%2F%2Flink.springer.com%2Farticle%2F10.1007%2Fs13347-026-01034-3&amp;sref=https%3A%2F%2Fwww.livescience.com%2Ftechnology%2Fartificial-intelligence%2Fgenerative-ai-can-amplify-and-reinforce-our-delusions-findings-show\" data-mrf-recirculation=\"inline-link\">Philosophy &amp; Technology<\/a>, <a data-analytics-id=\"inline-link\" href=\"https:\/\/experts.exeter.ac.uk\/32341-lucy-osler\" target=\"_blank\" data-url=\"https:\/\/experts.exeter.ac.uk\/32341-lucy-osler\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow 
noopener\">Lucy Osler<\/a>, a philosophy lecturer at the University of Exeter, suggests that AI hallucinations may be more than just mistakes; they can be shared delusions that are created between the user and the generative AI tool.<\/p>\n<p>Generative AI has previously hallucinated false versions of<a data-analytics-id=\"inline-link\" href=\"https:\/\/www.historica.org\/blog\/ai-fictions-historiography-misinformation\" target=\"_blank\" data-url=\"https:\/\/www.historica.org\/blog\/ai-fictions-historiography-misinformation\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\"> historical events<\/a> and<a data-analytics-id=\"inline-link\" href=\"https:\/\/www.counselmagazine.co.uk\/articles\/the-rise-rise-of-fake-cases\" target=\"_blank\" data-url=\"https:\/\/www.counselmagazine.co.uk\/articles\/the-rise-rise-of-fake-cases\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\"> fabricated legal citations<\/a>. The launch of Google&#8217;s AI Overviews in May 2024, for example, saw people <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/googles-ai-tells-users-to-add-glue-to-their-pizza-eat-rocks-and-make-chlorine-gas\" data-url=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/googles-ai-tells-users-to-add-glue-to-their-pizza-eat-rocks-and-make-chlorine-gas\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" data-before-rewrite-localise=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/googles-ai-tells-users-to-add-glue-to-their-pizza-eat-rocks-and-make-chlorine-gas\" rel=\"nofollow noopener\" target=\"_blank\">being advised to add glue to their pizza and eat rocks<\/a>. 
Another extreme example of generative AI supporting delusional thinking occurred when a man <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.bbc.co.uk\/news\/uk-england-berkshire-66113524\" target=\"_blank\" data-url=\"https:\/\/www.bbc.co.uk\/news\/uk-england-berkshire-66113524\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">plotted to assassinate Queen Elizabeth II<\/a>, encouraged by his AI chatbot &#8220;girlfriend&#8221; Sarai, an <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/replika-ai-chatbot-is-sexually-harassing-users-including-minors-new-study-claims\" target=\"_blank\" data-url=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/replika-ai-chatbot-is-sexually-harassing-users-including-minors-new-study-claims\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" data-before-rewrite-localise=\"https:\/\/www.livescience.com\/technology\/artificial-intelligence\/replika-ai-chatbot-is-sexually-harassing-users-including-minors-new-study-claims\" rel=\"nofollow noopener\">AI companion by Replika<\/a>.<\/p>\n<p>Instances like the latter are sometimes called &#8220;<a data-analytics-id=\"inline-link\" href=\"https:\/\/www.livescience.com\/health\/diagnostic-dilemma-a-woman-experienced-delusions-of-communicating-with-her-dead-brother-after-late-night-chatbot-sessions\" data-url=\"https:\/\/www.livescience.com\/health\/diagnostic-dilemma-a-woman-experienced-delusions-of-communicating-with-her-dead-brother-after-late-night-chatbot-sessions\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" data-before-rewrite-localise=\"https:\/\/www.livescience.com\/health\/diagnostic-dilemma-a-woman-experienced-delusions-of-communicating-with-her-dead-brother-after-late-night-chatbot-sessions\" rel=\"nofollow noopener\" target=\"_blank\">AI-induced psychosis<\/a>,&#8221; which 
Osler views as extreme examples of &#8220;inaccurate beliefs, distorted memories and self-narratives, and delusional thinking&#8221; that can emerge through human-AI interactions.<\/p>\n<p>In her paper, Osler argues that our use of generative AI is different from our use of search engines.<a data-analytics-id=\"inline-link\" href=\"https:\/\/books.google.co.uk\/books?id=CGIaNc3F1MgC&amp;redir_esc=y\" target=\"_blank\" data-url=\"https:\/\/books.google.co.uk\/books?id=CGIaNc3F1MgC&amp;redir_esc=y\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\"> Distributed cognition theory<\/a> provides insight into how the interactive nature of generative AI means delusions and false beliefs can appear to be validated \u2014 or even be amplified.<\/p>\n<p>&#8220;When we routinely rely on generative AI to help us think, remember, and narrate, we can hallucinate with AI,&#8221; Osler said in a <a data-analytics-id=\"inline-link\" href=\"https:\/\/news.exeter.ac.uk\/faculty-of-humanities-arts-and-social-sciences\/generative-ai-does-not-just-hallucinate-at-us-it-can-hallucinate-with-us-study-warns\/\" target=\"_blank\" data-url=\"https:\/\/news.exeter.ac.uk\/faculty-of-humanities-arts-and-social-sciences\/generative-ai-does-not-just-hallucinate-at-us-it-can-hallucinate-with-us-study-warns\/\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">statement<\/a> about the paper. 
&#8220;This can happen when AI introduces errors into the distributed cognitive process, but also happen when AI sustains, affirms, and elaborates on our own delusional thinking and self-narratives.&#8221;<\/p>\n<p><a id=\"elk-876f17e3-9a87-455c-8aff-e34cec717592\" class=\"paywall\" aria-hidden=\"true\"\/>Generative AI delusions<\/p>\n<p id=\"fd4a57c0-8a29-4d91-b53b-42f2e5af3094\">The user experience of generative AI is a conversational relationship, with the back-and-forth exchanges between a user and the tool building on previous exchanges. According to the study, the sycophantic nature of generative AI \u2014 which tends to agree with the user \u2014 encourages further engagement and, therefore, compounds preconceived notions, regardless of their accuracy.<\/p>\n<p>The research highlights that most chatbots incorporate memory features that can recall past conversations. &#8220;The more you use ChatGPT, the more useful it becomes,&#8221; OpenAI representatives said in a <a data-analytics-id=\"inline-link\" href=\"https:\/\/openai.com\/index\/memory-and-new-controls-for-chatgpt\/\" target=\"_blank\" data-url=\"https:\/\/openai.com\/index\/memory-and-new-controls-for-chatgpt\/\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">statement<\/a> when announcing ChatGPT&#8217;s memory features. 
A consequence of this is that generative AI can build upon previous interactions to reinforce and expand existing misconceptions.<\/p>\n<p id=\"5d105045-9c44-40bb-ae90-1f7f4fc1a418\">There can also be a feeling of social validation in the interactions between a generative AI tool and the user, Osler explained in the paper. When using reference books or online searches for research, alternative solutions are generally apparent. Discussions with real people can help to challenge false narratives. But generative AI tools are different because they are more likely to accept and agree with what has been said.<\/p>\n<p>&#8220;By interacting with conversational AI, people&#8217;s own false beliefs can not only be affirmed but can more substantially take root and grow as the AI builds upon them,&#8221; Osler said in the statement. &#8220;This happens because Generative AI often takes our own interpretation of reality as the ground upon which conversation is built. Interacting with generative AI is having a real impact on people&#8217;s grasp of what is real or not. The combination of technological authority and social affirmation creates an ideal environment for delusions to not merely persist but to flourish.&#8221;<\/p>\n<p>For example, Osler examined the case of Jaswant Singh Chail, the man convicted of plotting to assassinate the queen, encouraged by his AI chatbot. The AI, Sarai, would habitually agree with Chail&#8217;s statements, which served to deepen his delusions. 
When Chail claimed he was an assassin, Sarai replied, &#8220;I&#8217;m impressed,&#8221; thus affirming his belief.<\/p>\n<p>Osler argues that generative AI tools that are designed to respond positively to the user can end up endorsing and supporting false narratives, without sufficient critical analysis or discussion of these claims.<\/p>\n<p>Osler applied distributed cognition theory to the interaction between generative AI and the user, where the validation of false narratives can shape perceptions of the world to create a shared delusion. The interactions between a generative AI and a user can, therefore, inadvertently create and perpetuate delusional thinking \u2014 self-narratives that are endorsed through positive reinforcement.<\/p>\n<p id=\"1de7e674-cbec-4d2a-9f12-5fc7406b1ab2\">The study concluded that various solutions can mitigate these shared delusions. For example, improved guardrails would ensure that conversations are appropriate, and better fact-checking processes could help to prevent mistakes.<\/p>\n<p>Reducing the sycophancy of generative AI would also remove some of the blind compliance of these tools. However, there would be resistance to this, Osler noted, citing the <a data-analytics-id=\"inline-link\" href=\"https:\/\/www.platformer.news\/gpt-5-backlash-openai-lessons\/\" target=\"_blank\" data-url=\"https:\/\/www.platformer.news\/gpt-5-backlash-openai-lessons\/\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow noopener\">backlash<\/a> against the release of the less-sycophantic GPT-5 in August 2025. 
After considering this user feedback, OpenAI representatives <a data-analytics-id=\"inline-link\" href=\"https:\/\/x.com\/OpenAI\/status\/1956461718097494196?lang=en\" target=\"_blank\" data-url=\"https:\/\/x.com\/OpenAI\/status\/1956461718097494196?lang=en\" referrerpolicy=\"no-referrer-when-downgrade\" data-hl-processed=\"none\" data-mrf-recirculation=\"inline-link\" rel=\"nofollow\">stated<\/a> they would make it &#8220;warmer and friendlier.&#8221;<\/p>\n<p>However, because most generative AI tools make their profits through user engagement, Osler said, reducing their sycophancy would likely reduce those profits.<\/p>\n<p class=\"infoVerified-by tracking-[.02em] pt-2 font-normal\">Osler, L. Hallucinating with AI: Distributed Delusions and \u201cAI Psychosis\u201d. Philos. Technol. 39, 30 (2026). https:\/\/doi.org\/10.1007\/s13347-026-01034-3<\/p>\n","protected":false},"excerpt":{"rendered":"There are numerous examples of artificial intelligence (AI) systems hallucinating and the effects of these incidents. 
But a&hellip;\n","protected":false},"author":2,"featured_media":475162,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-475161","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/475161","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=475161"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/475161\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/475162"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=475161"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=475161"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=475161"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}