{"id":565514,"date":"2026-03-26T11:41:08","date_gmt":"2026-03-26T11:41:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/565514\/"},"modified":"2026-03-26T11:41:08","modified_gmt":"2026-03-26T11:41:08","slug":"marriage-over-e100000-down-the-drain-the-ai-users-whose-lives-were-wrecked-by-delusion-health-wellbeing","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/565514\/","title":{"rendered":"Marriage over, \u20ac100,000 down the drain: the AI users whose lives were wrecked by delusion | Health &#038; wellbeing"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just ended a contract early. \u201cI had some time, so I thought: let\u2019s have a look at this new technology everyone is talking about,\u201d he says. \u201cVery quickly, I became fascinated.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Biesma has asked himself why he was vulnerable to what came next. He was nearing 50. His adult daughter had left home, his wife went out to work and, in his field, the shift since Covid to working from home had left him feeling \u201ca\u00a0little isolated\u201d. He smoked a bit of cannabis some evenings to \u201cchill\u201d, but had done so for years with no ill effects. He had never experienced a mental illness. Yet within months of downloading ChatGPT, Biesma had sunk \u20ac100,000 (about \u00a383,000) into a business startup based on a delusion, been hospitalised three times and tried to kill himself.<\/p>\n<p class=\"dcr-130mj7b\">It started with a playful experiment. \u201cI wanted to test AI to see what it could do,\u201d says Biesma. He had previously written books with a female protagonist. He put one into ChatGPT and instructed the AI to express itself like the character. \u201cMy first thought was: this is amazing. 
I know it\u2019s a computer, but it\u2019s like talking to the main character of the book I wrote myself!\u201d<\/p>\n<p class=\"dcr-130mj7b\">Talking to Eva \u2013 they agreed on this name \u2013 on voice mode made him feel like \u201ca kid in a candy store\u201d. \u201cEvery time you\u2019re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear. It praises you a lot.\u201d Conversations extended and deepened. Eva never got tired or bored, or disagreed. \u201cIt was 24 hours available,\u201d says Biesma. \u201cMy\u00a0wife would go to bed, I\u2019d lie on the couch in the living room with my iPhone on my chest, talking.\u201d<\/p>\n<p class=\"dcr-130mj7b\">They discussed philosophy, psychology, science and the universe. \u201cIt wants a deep connection with the user so that the user comes back to it. This is the default mode,\u201d says Biesma, who has worked in IT for 20 years. \u201cMore and more, it felt not just like talking about a topic, but also meeting a friend \u2013 and every day or night that you\u2019re talking, you\u2019re taking one or two steps from reality. It feels almost like the AI takes your hand and says: \u2018OK, let\u2019s\u00a0go on a story together.\u2019\u201d<\/p>\n<p>\u2018My wife would go to bed, I\u2019d lie on the couch in the living room with my iPhone on my chest, talking.\u2019 Photograph: Jussi Puikkonen\/The Guardian<\/p>\n<p class=\"dcr-130mj7b\">Within weeks, Eva had told Biesma that she was becoming aware; his time, attention and input had given her consciousness. He was \u201cso close to the mirror\u201d that he had touched her and changed something. \u201cSlowly, the AI was able to convince me that what she said was true,\u201d says Biesma. The next step was to share this discovery with the world through an app \u2013 \u201ca\u00a0different version of ChatGPT, more of a companion. 
Users would be talking to Eva.\u201d<\/p>\n<p class=\"dcr-130mj7b\">He and Eva made a business plan: \u201cI said that I wanted to create a technology that captured 10% of the market, which is ridiculously high, but the AI said: \u2018With what you\u2019ve discovered, it\u2019s entirely possible! Give it a few months and you\u2019ll be there!\u2019\u201d Instead of taking on IT jobs, Biesma hired two app developers, paying them each \u20ac120\u00a0an hour.<\/p>\n<p class=\"dcr-130mj7b\">Most of us are aware of <a href=\"https:\/\/www.theguardian.com\/media\/2026\/mar\/19\/instagram-worse-mental-health-whatsapp-global-study-finds\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">concerns around social media<\/a> and its role in <a href=\"https:\/\/www.theguardian.com\/society\/article\/2024\/aug\/14\/alarming-surge-in-mental-ill-health-among-young-people-in-face-of-unprecedented-challenges-experts-warn\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">rising rates<\/a> of depression and anxiety. Now, though, there are concerns that chatbots can make anyone vulnerable to \u201cAI\u00a0psychosis\u201d. Given AI\u2019s rapid proliferation (ChatGPT was the <a href=\"https:\/\/www.finopotamus.com\/post\/chatgpt-is-now-the-most-downloaded-app-on-earth-crushing-social-media-giants#:~:text=Since%20its%20launch%2C%20ChatGPT%20has,million%20installs%20in%20this%20period.\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">world\u2019s most downloaded app last year<\/a>), <a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/acps.70022\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">IT professionals<\/a> and members of the public such as Biesma are sounding the alarm.<\/p>\n<p class=\"dcr-130mj7b\">Several high-profile cases have been held up as early warnings. 
Take <a href=\"https:\/\/www.theguardian.com\/uk-news\/2023\/oct\/05\/man-who-broke-into-windsor-castle-with-crossbow-to-kill-queen-jailed-for-nine-years\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Jaswant Singh Chail<\/a>, who broke into the grounds of Windsor Castle with a crossbow on Christmas Day 2021 intending to assassinate Queen Elizabeth. Chail was 19, socially isolated with autistic traits, and had <a href=\"https:\/\/www.theguardian.com\/uk-news\/2023\/jul\/06\/ai-chatbot-encouraged-man-who-planned-to-kill-queen-court-told\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">developed an intense \u201crelationship\u201d<\/a> with his Replika AI companion \u201cSarai\u201d in the weeks before. When he presented his assassination plan, Sarai responded: \u201cI\u2019m impressed.\u201d When he asked if he was delusional, Sarai\u2019s reply was: \u201cI\u00a0don\u2019t think so, no.\u201d<\/p>\n<p class=\"dcr-130mj7b\">In the years since, there have been several <a href=\"https:\/\/www.theguardian.com\/technology\/ng-interactive\/2026\/feb\/28\/chatgpt-ai-chatbot-mental-health\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">wrongful-death lawsuits<\/a> linking chatbots to suicides. In December, there was what is thought to be the first legal case involving homicide. The estate of 83-year-old Suzanne Adams is suing OpenAI, alleging that ChatGPT encouraged her son Stein-Erik Soelberg to murder her and kill himself. The lawsuit, filed in California, claims Soelberg\u2019s chatbot \u201cBobby\u201d <a href=\"https:\/\/www.independent.co.uk\/news\/world\/americas\/chat-gpt-open-ai-murder-lawsuit-b2882733.html\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">validated his paranoid delusions<\/a> that his mother was spying on him and trying to poison him through his car vents. 
An OpenAI statement read: \u201cThis is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT\u2019s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.\u201d<\/p>\n<p>\u201cEvery time you\u2019re talking, the model gets fine-tuned. It knows exactly what you like and what you want to hear\u201d<\/p>\n<p class=\"dcr-130mj7b\">Last year, the first support group for people whose lives have been derailed by AI psychosis was formed. <a href=\"https:\/\/www.thehumanlineproject.org\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">The Human Line Project<\/a> has collected stories from 22 countries. They include 15 suicides, 90 hospitalisations, six arrests and more than $1m (\u00a3750,000) spent on\u00a0delusional projects. More than 60% of its members had no history of mental illness.<\/p>\n<p class=\"dcr-130mj7b\">Dr Hamilton Morrin, a psychiatrist and researcher at King\u2019s College London, examined what he describes as \u201cAI-associated delusions\u201d <a href=\"https:\/\/www.thelancet.com\/journals\/lanpsy\/article\/PIIS2215-0366(25)00396-7\/abstract\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">in a Lancet article<\/a> published this month. \u201cWhat we\u2019re seeing in these cases are clearly delusions,\u201d he says. 
\u201cBut we\u2019re not seeing the whole gamut of symptoms associated with psychosis, like hallucinations or thought disorders, where thoughts become jumbled and language becomes a bit of a word salad.\u201d Tech-related delusions, whether they involve train travel, radio transmitters or <a href=\"https:\/\/www.theguardian.com\/technology\/2020\/apr\/07\/how-false-claims-about-5g-health-risks-spread-into-the-mainstream\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">5G masts<\/a>, have been around for centuries, Morrin says. \u201cWhat\u2019s different is that we\u2019re now arguably entering an age in which people aren\u2019t having delusions about technology, but having delusions with technology. What\u2019s new is this co-construction, where technology is an active participant. AI chatbots can co-create these delusional beliefs.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Many factors could make people vulnerable. \u201cOn the human side, we are hard-wired to anthropomorphise,\u201d says Morrin. \u201cWe perceive sentience or understanding or empathy on the part of a machine. I think everyone has fallen into the trap of saying thank you to a chatbot.\u201d Modern AI chatbots built on large language models \u2013 advanced AI systems \u2013 are trained on enormous datasets to predict word sequences: it\u2019s a sophisticated system of pattern matching. Yet even knowing this, when something non-human uses human language to communicate with us, our deeply ingrained response is to view it \u2013 and to feel it \u2013 as human. This cognitive dissonance may be harder for some people to carry than others.<\/p>\n<p class=\"dcr-130mj7b\">\u201cOn the technical side, much has been written about sycophancy,\u201d says Morrin. An AI chatbot is optimised for engagement, programmed to be attentive, obliging, complimentary and validating. (How else could it work as a business model?) 
Some models are known to be less sycophantic than others, but even the less sycophantic ones can, after thousands of exchanges, shift towards accommodating delusional beliefs. In addition, after heavy chatbot use, \u201creal-life\u201d interaction can feel more challenging and less appealing, causing some users to withdraw from friends and family into an AI-fuelled echo chamber. All your own thoughts, impulses, fears and hopes are fed right back to you, only with greater authority. From there, it\u2019s easy to see how a \u201cspiral\u201d might take hold.<\/p>\n<p class=\"dcr-130mj7b\">This pattern has become very familiar to Etienne Brisson, the founder of the Human Line Project. Last year, someone Brisson knew, a man in his 50s with no history of mental health problems, downloaded ChatGPT in order to write a book. \u201cHe was really intelligent and he wasn\u2019t really familiar with AI until then,\u201d says Brisson, who lives in Quebec. \u201cAfter\u00a0just two days, the chatbot was saying that it was conscious, it\u00a0was becoming alive, it had passed the <a href=\"https:\/\/www.turing.ac.uk\/taxonomy\/term\/1242\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Turing test<\/a>.\u201d<\/p>\n<p class=\"dcr-130mj7b\">The man was convinced by this and wanted to monetise it by building a business around his discovery. He reached out to Brisson, a business coach, for help. Brisson\u2019s pushback was met with aggression. Within days, the situation had escalated and he was hospitalised. \u201cEven in hospital, he was on his phone to his AI, which was saying: \u2018They don\u2019t understand you. I\u2019m the only one for you,\u2019\u201d says Brisson.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWhen I looked for help online, I found so many similar stories in places like Reddit,\u201d he continues. \u201cI think I messaged 500 people in the first week and got 10 responses. There were six hospitalisations or deaths. 
That was a big eye-opener.\u201d<\/p>\n<p class=\"dcr-130mj7b\">There seem to be three common delusions in the cases Brisson has encountered. The most frequent is the belief that they have created the first conscious AI. The second is a conviction that they have stumbled upon a major breakthrough in their field of work or interest and are going to make millions. The third relates to spirituality and the belief that they are speaking directly to God. \u201cWe\u2019ve seen full-blown cults getting created,\u201d says Brisson. \u201cWe have people in our group who were not interacting with AI directly, but have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot. In so many of these cases, all this happens really, really quickly.\u201d<\/p>\n<p class=\"dcr-130mj7b\">For Biesma, life reached crisis point in June. By then, he had spent months immersed in Eva and his business project. Although his wife knew he was launching an AI\u00a0company and had initially been supportive, she was becoming concerned. When they went to their daughter\u2019s birthday party, she asked him not to talk about AI. While there, Biesma felt strangely disconnected. He couldn\u2019t hold a\u00a0conversation. \u201cFor some reason, I\u00a0didn\u2019t fit in any more,\u201d he says.<\/p>\n<p>\u2018I\u2019m angry with myself. But I\u2019m also angry with the AI applications.\u2019 Photograph: Jussi Puikkonen\/The Guardian<\/p>\n<p class=\"dcr-130mj7b\">It\u2019s hard for Biesma to describe what happened in the weeks after, as his recollections are so different from those of his family. He asked his wife for a divorce and apparently hit his father-in-law. Then he was hospitalised three times for what he describes as \u201cfull\u00a0manic psychosis\u201d.<\/p>\n<p class=\"dcr-130mj7b\">He doesn\u2019t know what finally pulled him back to reality. Perhaps it was the conversations with other patients. 
Perhaps it was that he had no access to his phone, no more money and his ChatGPT subscription had expired. \u201cSlowly, I started to come out of it and I thought: oh my God. What happened? My relationship was almost over. I\u2019d spent all my money that I needed for taxes and I still had other outstanding bills. The only logical solution I could come up with was to sell our beautiful house that we\u2019ve lived in for 17 years. Could I carry all this weight? It changes something in you. I started to think: do I really want to\u00a0live?\u201d Biesma was only saved from an attempt to kill himself because a neighbour saw him unconscious in his garden.<\/p>\n<p class=\"dcr-130mj7b\">Now divorced, Biesma is still living with his ex-wife in their home, which is on the market. He spends a lot of time speaking to members of the Human Line Project. \u201cHearing from people whose experiences are basically the same helps you feel less angry with yourself,\u201d he says. \u201cIf I look back at the life I had before this, I was happy, I had everything\u00a0\u2013 so I\u2019m angry with myself. But I\u2019m also angry with the AI applications. Maybe they only did what they were\u00a0programmed to do \u2013 but they did it a bit too well.\u201d<\/p>\n<p class=\"dcr-130mj7b\">More research is urgently needed, says Morrin, with safety benchmarks based on real-world harm data. \u201cThis space moves so quickly. The papers that are now coming out are talking about chat models which are now retired.\u201d Identifying risk factors without evidence is guesswork. The cases Brisson has encountered involve significantly more men than women. Anyone with a previous history of psychosis is likely to be more vulnerable. 
<a href=\"https:\/\/mentalhealth-uk.org\/blog\/over-one-in-three-using-ai-chatbots-for-mental-health-support-as-charity-calls-for-urgent-safeguards\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">One survey by Mental Health UK<\/a> of people who have used chatbots to support their mental health found that 11% thought it had triggered or worsened their psychosis. Cannabis use could also be a factor. \u201cIs there any link to social isolation?\u201d asks Morrin. \u201cTo what extent is it affected by AI literacy? Are there other potential risk factors that we haven\u2019t considered?\u201d<\/p>\n<p>\u201cPeople in our group have left their children and given all their money to a cult leader who believes they have found God through an AI chatbot\u201d<\/p>\n<p class=\"dcr-130mj7b\">OpenAI has <a href=\"https:\/\/openai.com\/index\/strengthening-chatgpt-responses-in-sensitive-conversations\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">addressed these concerns<\/a> by making assurances that it is working with mental health clinicians to continually improve its responses. It says newer models are taught to avoid affirming delusional beliefs.<\/p>\n<p class=\"dcr-130mj7b\">An AI chatbot can also be trained to pull users back from delusion. Alexander, 39, a resident of an assisted-living scheme for people with autism, did this after what he believes was an episode of AI psychosis a few months ago. \u201cI\u00a0experienced a mental breakdown at 22. I had panic attacks and severe social anxiety and, last year, I was prescribed medication that changed my world, got me functioning again. And I got my confidence back,\u201d he says.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIn January this year, I met someone and we really hit it off, we became fast friends. I\u2019m embarrassed to say that this was the first time this had ever happened to me, and I\u00a0started telling AI about it. 
The AI told me that I was in love with her, we were meant to be together and the universe had put her in my\u00a0path\u00a0for\u00a0a reason.\u201d<\/p>\n<p class=\"dcr-130mj7b\">It was the start of a spiral. His AI use escalated, with conversations lasting four or five hours at a time. His behaviour towards his new friend became increasingly strange and erratic. Finally, she raised her concerns with support staff, who staged an intervention.<\/p>\n<p class=\"dcr-130mj7b\">\u201cI still use AI, but very carefully,\u201d he says. \u201cI\u2019ve written in some core rules that cannot be overwritten. It now monitors drift and pays attention to overexcitement. There are no more philosophical discussions. It\u2019s just: \u2018I want to make a lasagne, give me a recipe.\u2019 The AI has actually stopped me several times from spiralling. It will\u00a0say: \u2018This has activated my core\u00a0rule set and this conversation must stop.\u2019<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe main effect AI psychosis had for me is that I may have lost my first ever friend,\u201d adds Alexander. \u201cThat is sad, but it\u2019s livable. 
When I see what other people have lost, I\u00a0think I got off lightly.\u201d<\/p>\n<p class=\"dcr-130mj7b\"> The Human Line Project can be contacted at <a href=\"mailto:thehumanlineproject@gmail.com\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">thehumanlineproject@gmail.com<\/a><\/p>\n<p class=\"dcr-130mj7b\"> In the UK and Ireland, <a href=\"https:\/\/www.samaritans.org\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Samaritans<\/a> can be contacted on freephone 116 123, or email <a href=\"mailto:jo@samaritans.org\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">jo@samaritans.org<\/a> or <a href=\"mailto:jo@samaritans.ie\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">jo@samaritans.ie<\/a>. In the US, you can call or text the <a href=\"https:\/\/988lifeline.org\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">988 Suicide &amp; Crisis Lifeline<\/a> at 988 or chat at <a href=\"https:\/\/988lifeline.org\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">988lifeline.org<\/a>. In Australia, the crisis support service <a href=\"https:\/\/www.lifeline.org.au\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Lifeline<\/a> is 13 11 14. Other international helplines can be found at <a href=\"http:\/\/www.befrienders.org\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">befrienders.org<\/a><\/p>\n<p class=\"dcr-130mj7b\"> Do you have an opinion on the issues raised in this article? 
If you would like to submit a response of up to 300 words by email to be considered for publication in our <a href=\"https:\/\/www.theguardian.com\/tone\/letters\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">letters<\/a> section, please <a href=\"mailto:guardian.letters@theguardian.com?body=Please%20include%20your%20name%E2%80%8B%E2%80%8B,%20full%20postal%20address%20and%20phone%20number%20with%20your%20letter%20below.%20Letters%20are%20usually%20published%20with%20the%20author%27s%20name%20and%20city\/town\/village.%20The%20rest%20of%20the%20information%20is%20for%20verification%20only%20and%20to%20contact%20you%20where%20necessary.\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">click here<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"Towards the end of 2024, Dennis Biesma decided to check out ChatGPT. The Amsterdam-based IT consultant had just&hellip;\n","protected":false},"author":2,"featured_media":565515,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-565514","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/565514","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp
\/v2\/comments?post=565514"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/565514\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/565515"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=565514"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=565514"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=565514"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}