{"id":506063,"date":"2026-02-28T21:55:11","date_gmt":"2026-02-28T21:55:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/506063\/"},"modified":"2026-02-28T21:55:11","modified_gmt":"2026-02-28T21:55:11","slug":"chatbot-use-can-cause-mental-illness-to-get-worse-research-finds","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/506063\/","title":{"rendered":"Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds"},"content":{"rendered":"<p class=\"article-paragraph skip\">Sign up to see the future, today<\/p>\n<p class=\"article-paragraph skip\">Can\u2019t-miss innovations from the bleeding edge of science and tech<\/p>\n<p class=\"pw-incontent-excluded article-paragraph skip\">A new study found that chatbot use appeared to worsen symptoms of mental illness in people struggling with an array of conditions, adding to a <a href=\"https:\/\/futurism.com\/artificial-intelligence\/doctors-link-ai-psychosis\" rel=\"nofollow noopener\" target=\"_blank\">rising consensus<\/a> among medical experts that interacting with unregulated chatbots might steer some users into crisis.<\/p>\n<p class=\"article-paragraph skip\">The <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/acps.70068\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">research<\/a>, conducted by a team of psychiatrists at Denmark\u2019s Aarhus University and published earlier this month in the journal Acta Psychiatrica Scandinavica, analyzed digital health records from roughly 54,000 Danish patients with diagnosed mental illnesses. After identifying 181 instances of patient notes containing mentions of AI chatbots, they determined that use of the bots \u2014\u00a0particularly intensive, prolonged use \u2014\u00a0appeared to deepen symptoms of mental illness in dozens of patients. 
They found that this pattern seemed to be especially true for patients prone to delusions or mania,\u00a0and that the risks of chatbot use may be \u201csevere or even fatal\u201d for some.<\/p>\n<p class=\"article-paragraph skip\">This latest study was led by Dr. S\u00f8ren Dinesen \u00d8stergaard, a Danish psychiatrist who, back in August 2023, <a href=\"https:\/\/academic.oup.com\/schizophreniabulletin\/article\/49\/6\/1418\/7251361\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">predicted that<\/a> human-like chatbots like ChatGPT could reinforce delusions and hallucinations in people \u201cprone to psychosis.\u201d In a <a href=\"https:\/\/health.au.dk\/en\/display\/artikel\/ny-forskning-ai-chatbots-kan-formentlig-forvaerre-psykisk-sygdom\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">press release<\/a>, \u00d8stergaard cautioned that while more research into causality\u00a0is needed, he \u201cwould argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cI would urge caution here,\u201d said \u00d8stergaard.<\/p>\n<p class=\"article-paragraph skip\">Though limited to Denmark, the study\u2019s findings <a href=\"https:\/\/futurism.com\/commitment-jail-chatgpt-psychosis\" rel=\"nofollow noopener\" target=\"_blank\">add to a wave<\/a> of <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/chatgpt-ai-cyberstalking-social-media-1235496884\/\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">public<\/a> <a href=\"https:\/\/www.nytimes.com\/2025\/08\/08\/technology\/ai-chatbots-delusions-chatgpt.html\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">reporting<\/a> and <a href=\"https:\/\/innovationscns.com\/youre-not-crazy-a-case-of-new-onset-ai-associated-psychosis\/\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">research<\/a> about AI-linked mental health crises \u2014 sometimes referred to by mental 
health professionals as \u201c<a href=\"https:\/\/www.psychologytoday.com\/us\/blog\/urban-survival\/202507\/the-emerging-problem-of-ai-psychosis\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">AI psychosis<\/a>\u201d \u2014\u00a0in which bots like ChatGPT and others introduce, reinforce, or otherwise stoke delusional beliefs in users in ways that contribute to destructive mental spirals and real-world outcomes. Indeed, instead of nudging users away from delusional beliefs or potentially harmful fixations, previous studies show that <a href=\"https:\/\/futurism.com\/stanford-therapist-chatbots-encouraging-delusions\" rel=\"nofollow noopener\" target=\"_blank\">chatbots tend to reinforce them<\/a> \u2014\u00a0which is exactly what mental health professionals urge people not to do when communicating with someone who may be in crisis.<\/p>\n<p class=\"article-paragraph skip\">\u201cAI chatbots have an inherent tendency to validate the user\u2019s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one,\u201d said \u00d8stergaard, adding that intensive chatbot use \u201cappears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia.\u201d<\/p>\n<p class=\"article-paragraph skip\">The Danish study found that in addition to deepening delusional beliefs, chatbots also appeared to worsen <a href=\"https:\/\/futurism.com\/artificial-intelligence\/chatgpt-suicide-openai-gpt4o\" rel=\"nofollow noopener\" target=\"_blank\">suicidal ideation<\/a> and self-harm, disordered eating habits, depression, and obsessive or compulsive symptoms, among other symptoms of mental health issues. 
<\/p>\n<p class=\"article-paragraph skip\">The researchers did note that, out of the nearly 54,000 records they analyzed, they identified 32 cases in which patients\u2019 use of chatbots for therapy or companionship appeared to be \u201cconstructive,\u201d for example, alleviating symptoms of loneliness or providing what patients found to be a helpful version of talk therapy. But while use of chatbots as a substitute for human therapists has proven to be an extremely common use case, the study\u2019s authors <a href=\"https:\/\/www.theverge.com\/policy\/665685\/ai-therapy-meta-chatbot-surveillance-risks-trump\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">emphasized that AI therapy<\/a> is still completely unregulated terrain.<\/p>\n<p class=\"article-paragraph skip\">As Futurism and others have reported, delusional spirals tied to <a href=\"https:\/\/www.rollingstone.com\/culture\/culture-features\/ai-chatbot-disappearance-jon-ganz-1235438552\/\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">extensive chatbot use<\/a> \u2014\u00a0and the tangible consequences of these episodes, which range from <a href=\"https:\/\/futurism.com\/chatgpt-mental-health-crises\" rel=\"nofollow noopener\" target=\"_blank\">divorce<\/a> to <a href=\"https:\/\/futurism.com\/artificial-intelligence\/meta-ai-glasses-desert-aliens\" rel=\"nofollow noopener\" target=\"_blank\">job loss and financial distress<\/a>, <a href=\"https:\/\/futurism.com\/artificial-intelligence\/mental-illness-chatgpt-psychosis-lawsuit\" rel=\"nofollow noopener\" target=\"_blank\">self-harm<\/a>, <a href=\"https:\/\/futurism.com\/artificial-intelligence\/ai-abuse-harassment-stalking\" rel=\"nofollow noopener\" target=\"_blank\">stalking and harassment<\/a>, <a href=\"https:\/\/futurism.com\/commitment-jail-chatgpt-psychosis\" rel=\"nofollow noopener\" target=\"_blank\">hospitalization and jailing<\/a>, and <a 
href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">even death<\/a> \u2014\u00a0have impacted people with known histories of serious mental illnesses and as well as those with no such background. The New York Times <a href=\"https:\/\/www.nytimes.com\/2026\/01\/26\/us\/chatgpt-delusions-psychosis.html\" rel=\"noreferrer nofollow noopener\" target=\"_blank\">recently interviewed<\/a> dozens of mental health professionals who reported that AI delusions are increasingly showing up in their practice.<\/p>\n<p class=\"article-paragraph skip\">OpenAI, meanwhile, is <a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-gpt-4o-deaths\" rel=\"nofollow noopener\" target=\"_blank\">facing over a dozen lawsuits<\/a> related to user safety and the possible psychological impacts of extensive ChatGPT use. One plaintiff, 34-year-old California man named John Jacquez, had been diagnosed with schizoaffective disorder \u2014\u00a0a condition that he worked to manage for years until ChatGPT sent him spiraling into a devastating psychosis, he claims in his lawsuit. In an interview, Jacquez <a href=\"https:\/\/futurism.com\/artificial-intelligence\/mental-illness-chatgpt-psychosis-lawsuit\" rel=\"nofollow noopener\" target=\"_blank\">told Futurism<\/a> that had he been warned that ChatGPT could reinforce delusional thinking, he \u201cnever would\u2019ve touched the program.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cI didn\u2019t see any warnings that it could be negative to mental health,\u201d said Jacquez.<\/p>\n<p class=\"article-paragraph skip\">\u201cI fear the problem is more common than most people think,\u201d said \u00d8stergaard. 
\u201cIn our study, we are only seeing the tip of the iceberg, as we have only been able to identify cases that were described in the electronic health records.\u201d<\/p>\n<p class=\"article-paragraph skip\">\u201cThere are likely far more,\u201d he added, \u201cthat have gone undetected.\u201d<\/p>\n<p class=\"article-paragraph skip\">More on AI delusions: <a href=\"https:\/\/futurism.com\/artificial-intelligence\/ai-abuse-harassment-stalking\" rel=\"nofollow noopener\" target=\"_blank\">AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Sign up to see the future, today Can\u2019t-miss innovations from the bleeding edge of science and tech A&hellip;\n","protected":false},"author":2,"featured_media":506064,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-506063","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/506063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=506063"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/506063\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/506064"}],"wp:attachment":[{"href":"
https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=506063"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=506063"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=506063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}