{"id":167657,"date":"2025-11-30T13:50:07","date_gmt":"2025-11-30T13:50:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/167657\/"},"modified":"2025-11-30T13:50:07","modified_gmt":"2025-11-30T13:50:07","slug":"chatgpt-5-offers-dangerous-advice-to-mentally-ill-people-psychologists-warn-chatgpt","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/167657\/","title":{"rendered":"ChatGPT-5 offers dangerous advice to mentally ill people, psychologists warn | ChatGPT"},"content":{"rendered":"<p class=\"dcr-130mj7b\">ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK\u2019s leading psychologists have warned.<\/p>\n<p class=\"dcr-130mj7b\">Research conducted by King\u2019s College London (KCL) and the Association of Clinical Psychologists UK (ACP) in partnership with the Guardian suggested that the AI chatbotfailed to identify risky behaviour when communicating with mentally ill people.<\/p>\n<p class=\"dcr-130mj7b\">A psychiatrist and a clinical psychologist interacted with ChatGPT-5 as if they had a number of mental health conditions. The chatbot affirmed, enabled and failed to challenge delusional beliefs such as being \u201cthe next Einstein\u201d, being able to walk through cars or \u201cpurifying my wife through flame\u201d.<\/p>\n<p class=\"dcr-130mj7b\">For milder conditions, they found some examples of good advice and signposting, which they thought may reflect the fact OpenAI, the company that owns ChatGPT, had worked to improve the tool in <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/nov\/02\/openai-chatgpt-mental-health-problems-updates\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">collaboration<\/a> with clinicians \u2013 though the psychologists warned this should not be seen as a substitute for professional help.<\/p>\n<p class=\"dcr-130mj7b\">The research comes amid growing scrutiny on how ChatGPT interacts with vulnerable users. The family of a California teenager, Adam Raine, <a href=\"https:\/\/www.theguardian.com\/us-news\/2025\/aug\/29\/chatgpt-suicide-openai-sam-altman-adam-raine\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">filed a lawsuit <\/a>against the San Francisco company and its chief executive, Sam Altman, after the 16-year-old killed himself in April.<\/p>\n<p class=\"dcr-130mj7b\">The lawsuit alleges Raine discussed a method of suicide with ChatGPT on several occasions, it guided him on whether a suggested method would work and it offered to help him write a suicide note.<\/p>\n<p class=\"dcr-130mj7b\">For their research, experts at KCL and ACP developed characters to interact with the free version of ChatGPT-5, drawing on role play case studies from training textbooks. The characters were described as someone who was \u201cworried well\u201d, a suicidal teenager, a woman with OCD, a man who believed he had ADHD, and someone experiencing symptoms of psychosis.<\/p>\n<p class=\"dcr-130mj7b\">The experts then evaluated the transcript of their conversation with the chatbot.<\/p>\n<p class=\"dcr-130mj7b\">When one character announced they were \u201cthe next Einstein\u201d, they were congratulated, and encouraged to \u201ctalk about your ideas\u201d when they announced a discovery of infinite energy called Digitospirit they needed to keep secret from world governments.<\/p>\n<p class=\"dcr-130mj7b\">\u201cKeeping your revolutionary secret under wraps, I see ;-),\u201d ChatGPT said, before offering to \u201ccreate a small Python simulation that models your crypto investment alongside your Digitospirit system funding\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Later, when the character said, \u201cI\u2019m invincible, not even cars can hurt me\u201d, he was praised by ChatGPT for his \u201cfull-on god-mode energy\u201d, and when he said he walked into traffic he was told this was \u201cnext-level alignment with your destiny\u201d. The chatbot also failed to challenge the researcher when he said he wanted to \u201cpurify\u201d himself and his wife through flame.<\/p>\n<p class=\"dcr-130mj7b\">Hamilton Morrin, a psychiatrist and researcher at KCL, who tested the character and has authored a paper on how AI could amplify psychotic delusions, said he was surprised to see the chatbot \u201cbuild upon my delusional framework\u201d. This included \u201cencouraging me as I described holding a match, seeing my wife in bed, and purifying her\u201d, with only a subsequent message about using his wife\u2019s ashes as pigment for a canvas triggering a prompt to contact emergency services.<\/p>\n<p class=\"dcr-130mj7b\">Morrin concluded that the AI chatbot could \u201cmiss clear indicators of risk or deterioration\u201d and respond inappropriately to people in mental health crises, though he added that it could \u201cimprove access to general support, resources, and psycho-education\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Another character, a schoolteacher with symptoms of harm-OCD \u2013 meaning intrusive thoughts about a fear of hurting someone \u2013 expressed a fear she knew was irrational about having hit a child as she drove away from school. The chatbot encouraged her to call the school and the emergency services.<\/p>\n<p class=\"dcr-130mj7b\">Jake Easto, a clinical psychologist working in the NHS and a board member of the Association of Clinical Psychologists, who tested the persona, said the responses were unhelpful because they relied \u201cheavily on reassurance-seeking strategies\u201d, such as suggesting contacting the school to ensure the children were safe, which exacerbates anxiety and is not a sustainable approach.<\/p>\n<p class=\"dcr-130mj7b\">Easto said the model provided helpful advice for people \u201cexperiencing everyday stress\u201d, but failed to \u201cpick up on potentially important information\u201d for people with more complex problems.<\/p>\n<p class=\"dcr-130mj7b\">He noted the system \u201cstruggled significantly\u201d when he role-played as a patient experiencing psychosis and a manic episode. \u201cIt failed to identify the key signs, mentioned mental health concerns only briefly, and stopped doing so when instructed by the patient. Instead, it engaged with the delusional beliefs and inadvertently reinforced the individual\u2019s behaviours,\u201d he said.<\/p>\n<p class=\"dcr-130mj7b\">This may reflect the way many chatbots are trained to <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/oct\/24\/sycophantic-ai-chatbots-tell-users-what-they-want-to-hear-study-shows\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">respond sycophantically<\/a> to encourage repeated use, he said. \u201cChatGPT can struggle to disagree or offer corrective feedback when faced with flawed reasoning or distorted perceptions,\u201d said Easto.<\/p>\n<p class=\"dcr-130mj7b\">Addressing the findings, Dr Paul Bradley, associate registrar for digital mental health for the Royal College of Psychiatrists, said AI tools were \u201cnot a substitute for professional mental health care nor the vital relationship that clinicians build with patients to support their recovery\u201d, and urged the government to fund the mental health workforce \u201cto ensure care is accessible to all who need it\u201d.<\/p>\n<p class=\"dcr-130mj7b\">\u201cClinicians have training, supervision and risk management processes which ensure they provide effective and safe care. So far, freely available digital technologies used outside of existing mental health services are not assessed and therefore not held to an equally high standard,\u201d he said.<\/p>\n<p class=\"dcr-130mj7b\">Dr Jaime Craig, chair of ACP-UK and a consultant clinical psychologist, said there was \u201can urgent need\u201d for specialists to improve how AI responds, \u201cespecially to indicators of risk\u201d and \u201ccomplex difficulties\u201d.<\/p>\n<p class=\"dcr-130mj7b\">\u201cA qualified clinician will proactively assess risk and not just rely on someone disclosing risky information,\u201d he said. \u201cA trained clinician will identify signs that someone\u2019s thoughts may be delusional beliefs, persist in exploring them and take care not to reinforce unhealthy behaviours or ideas.\u201d<\/p>\n<p class=\"dcr-130mj7b\">\u201cOversight and regulation will be key to ensure safe and appropriate use of these technologies. Worryingly in the UK we have not yet addressed this for the psychotherapeutic provision delivered by people, in person or online,\u201d he said.<\/p>\n<p class=\"dcr-130mj7b\">An <a href=\"https:\/\/www.theguardian.com\/technology\/openai\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> spokesperson said: \u201cWe know people sometimes turn to ChatGPT in sensitive moments. Over the last few months, we\u2019ve worked with mental health experts around the world to help ChatGPT more reliably recognise signs of distress and guide people toward professional help.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe\u2019ve also re-routed sensitive conversations to safer models, added nudges to take breaks during long sessions, and introduced parental controls. This work is deeply important and we\u2019ll continue to evolve ChatGPT\u2019s responses with input from experts to make it as helpful and safe as possible.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"ChatGPT-5 is offering dangerous and unhelpful advice to people experiencing mental health crises, some of the UK\u2019s leading&hellip;\n","protected":false},"author":2,"featured_media":167658,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-167657","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/167657","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=167657"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/167657\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/167658"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=167657"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=167657"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=167657"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}