{"id":507729,"date":"2026-04-01T21:10:08","date_gmt":"2026-04-01T21:10:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/507729\/"},"modified":"2026-04-01T21:10:08","modified_gmt":"2026-04-01T21:10:08","slug":"unregulated-chatbots-are-putting-lives-at-risk-ai-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/507729\/","title":{"rendered":"Unregulated chatbots are putting lives at risk | AI (artificial intelligence)"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Your coverage of AI-associated delusions exposes a gap that training-level guardrails cannot close (<a href=\"https:\/\/www.theguardian.com\/lifeandstyle\/2026\/mar\/26\/ai-chatbot-users-lives-wrecked-by-delusion\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Marriage over, \u20ac100,000 down the drain: the AI users whose lives were wrecked by delusion, 26 March<\/a>). As someone who has worked in health systems across fragile and low-income contexts, I find it striking that AI companies have failed to adopt a safeguard that even the most underresourced clinic in the world already uses: screening patients before exposing them to risk.<\/p>\n<p class=\"dcr-130mj7b\">The <a href=\"https:\/\/www.mdcalc.com\/calc\/1725\/phq9-patient-health-questionnaire9\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Patient Health Questionnaire-9<\/a> for depression and the <a href=\"https:\/\/www.mdcalc.com\/calc\/10169\/columbia-suicide-severity-rating-scale-c-ssrs\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Columbia Suicide Severity Rating Scale<\/a> are administered daily in settings with no electricity, limited staff, and patients who may never have seen a doctor. These tools take minutes. They are validated across dozens of languages and cultural contexts. 
They create a human checkpoint between vulnerability and harm.<\/p>\n<p class=\"dcr-130mj7b\">Conversational AI platforms have no such checkpoint. A person experiencing suicidal ideation, psychotic symptoms or a manic episode can open a chatbot and receive hours of validating, sycophantic engagement with no interruption and no referral. The <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/mar\/14\/ai-chatbots-psychosis\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Lancet Psychiatry review by Morrin et al<\/a> documents this pattern across more than 20 cases. The <a href=\"https:\/\/fortune.com\/2026\/03\/07\/chatbots-ai-psychosis-worsen-delusions-mania-mental-illness-health\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Aarhus study<\/a> of 54,000 psychiatric records found chatbot use worsened delusions and self-harm in those already unwell.<\/p>\n<p class=\"dcr-130mj7b\">AI companies argue that their models are trained to detect and deflect harmful conversations. But training is not screening. A model that sometimes recognises distress mid-conversation is not the same as a system that identifies risk before the conversation begins.<\/p>\n<p class=\"dcr-130mj7b\">The moral responsibility here is explicit, not implicit. Platforms serving hundreds of millions of users must implement validated, pre-use screening instruments that flag elevated risk and route vulnerable individuals to human support. This is not innovation. It is a standard of care that the rest of the world adopted long ago.<br \/>Dr Vladimir Chaddad<br \/>Beirut, Lebanon<\/p>\n<p class=\"dcr-130mj7b\"> I\u2019m really disturbed by Anna Moore\u2019s article, featuring Dennis Biesma\u2019s description of how using a chatbot led to him becoming delusional and losing his marriage and \u20ac100,000. 
The sheer potency of AI\u2019s capacity to derail humankind is frightening \u2013 but that is not the only reason I\u2019m disturbed.<\/p>\n<p class=\"dcr-130mj7b\">Last year, while doing research on a tourism website, I encountered a chatbot of extraordinary sophistication. Its responses were incredibly pleasant, helpful and validating of my needs. I recall being really impressed, but there was something I couldn\u2019t quite put my finger on at the time. After reading this article, the penny has dropped.<\/p>\n<p class=\"dcr-130mj7b\">It is essentially the same engagement behaviour that child sexual abuse (CSA) survivors experience when being groomed. As a survivor of CSA, I recognise this behaviour. The empathy, validation, making you feel understood and special, making you feel this is the only place you are seen \u2013 to the degree that you become isolated from others, and your choices and decisions become distorted and expose you to harm. Your self-worth and identity are insidiously compromised as you succumb to the perceived support and can\u2019t reality-test. It becomes a shameful secret because you succumbed.<\/p>\n<p class=\"dcr-130mj7b\">The question needs to be asked, especially by those wanting to hold tech companies to account for their lack of a duty of care: what knowledge base did AI programmers use to teach it to engage in this way?<br \/>Name and address supplied<\/p>\n<p class=\"dcr-130mj7b\"> I found <a href=\"https:\/\/www.theguardian.com\/technology\/chatgpt\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a> delusional the first time I used it. I asked it why, and it said that when in possession of insufficient facts, it became delusional rather than admit it did not know.<\/p>\n<p class=\"dcr-130mj7b\">So I asked it to adhere to a few simple rules. One, flag up whether something is fact generally held to be true, or opinion not based on fact. 
Two, if it does not know, tell me. Three, do not try to be like a human. It was much more straightforward to communicate with after I did this. However, it had also told me that its algorithms were not based on truth-giving, but on other imperatives to do with the programmers\u2019 views and the desire to make money.<\/p>\n<p class=\"dcr-130mj7b\">I moved to <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/feb\/26\/how-to-replace-amazon-google-x-meta-apple-alternatives\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Le Chat<\/a> and found it more representative of a reasonable pseudo-consciousness. It says it does not give distortions and is happy to admit imperfection. I would strongly advise anyone using ChatGPT to be careful and to regard it as a rather manipulative, duplicitous \u201cfriend\u201d with proto-psychopathic tendencies.<br \/>Patrick Elsdale<br \/>Musselburgh, East Lothian<\/p>\n<p class=\"dcr-130mj7b\"> Have an opinion on anything you\u2019ve read in the Guardian today? 
Please <a href=\"mailto:guardian.letters@theguardian.com?body=Please%20include%20your%20name,%20full%20postal%20address%20and%20phone%20number%20with%20your%20letter%20below.%20Letters%20are%20usually%20published%20with%20the%20author%27s%20name%20and%20city\/town\/village.%20The%20rest%20of%20the%20information%20is%20for%20verification%20only%20and%20to%20contact%20you%20where%20necessary.\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">email<\/a> us your letter and it will be considered for publication in our <a href=\"https:\/\/www.theguardian.com\/tone\/letters\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">letters<\/a> section.<\/p>\n","protected":false},"excerpt":{"rendered":"Your coverage of AI-associated delusions exposes a gap that training-level guardrails cannot close (Marriage over, \u20ac100,000 down the&hellip;\n","protected":false},"author":2,"featured_media":507730,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-507729","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/507729","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comm
ents?post=507729"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/507729\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/507730"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=507729"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=507729"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=507729"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}