{"id":47302,"date":"2025-08-05T15:26:07","date_gmt":"2025-08-05T15:26:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/47302\/"},"modified":"2025-08-05T15:26:07","modified_gmt":"2025-08-05T15:26:07","slug":"after-a-deluge-of-mental-health-concerns-chatgpt-will-now-nudge-users-to-take-breaks","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/47302\/","title":{"rendered":"After a Deluge of Mental Health Concerns, ChatGPT Will Now Nudge Users to Take &#8216;Breaks&#8217;"},"content":{"rendered":"<p>It\u2019s become increasingly common for OpenAI\u2019s ChatGPT to be <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" rel=\"nofollow noopener\" target=\"_blank\">accused of contributing to users\u2019 mental health problems<\/a>. As the company readies the release of its latest model (GPT-5), it wants everyone to know that it\u2019s instituting new guardrails on the chatbot to prevent users from losing their minds while chatting.<\/p>\n<p>On Monday, OpenAI announced <a href=\"https:\/\/openai.com\/index\/how-we&#039;re-optimizing-chatgpt\/\" rel=\"nofollow noopener\" target=\"_blank\">in a blog post<\/a> that it had introduced a new feature in ChatGPT that encourages users to take occasional breaks while conversing with the app. \u201cStarting today, you\u2019ll see gentle reminders during long sessions to encourage breaks,\u201d the company said. \u201cWe\u2019ll keep tuning when and how they show up so they feel natural and helpful.\u201d<\/p>\n<p>The company also claims it\u2019s working on making its model better at recognizing when a user may be showing signs of mental health problems. \u201cAI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,\u201d the blog states. 
\u201cTo us, helping you thrive means being there when you\u2019re struggling, helping you stay in control of your time, and guiding\u2014not deciding\u2014when you face personal challenges.\u201d The company added that it\u2019s \u201cworking closely with experts to improve how ChatGPT responds in critical moments\u2014for example, when someone shows signs of mental or emotional distress.\u201d<\/p>\n<p>In June, Futurism <a href=\"https:\/\/futurism.com\/chatgpt-mental-health-crises\" rel=\"nofollow noopener\" target=\"_blank\">reported<\/a> that some ChatGPT users were \u201cspiraling into severe delusions\u201d as a result of their conversations with the chatbot. The bot\u2019s inability to check itself when feeding dubious information to users seems to have contributed to a self-reinforcing feedback loop of paranoid beliefs:<\/p>\n<p>During a traumatic breakup, a different woman became transfixed on ChatGPT as it told her she\u2019d been chosen to pull the \u201csacred system version of [it] online\u201d and that it was serving as a \u201csoul-training mirror\u201d; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was \u201cThe Flamekeeper\u201d as he cut out anyone who tried to help.<\/p>\n<p><a href=\"https:\/\/www.wsj.com\/tech\/ai\/chatgpt-chatbot-psychology-manic-episodes-57452d14?st=mbA162&amp;reflink=desktopwebshare_permalink\" rel=\"nofollow noopener\" target=\"_blank\">Another story<\/a> published by the Wall Street Journal documented a frightening ordeal in which a man on the autism spectrum conversed with the chatbot, which continually reinforced his unconventional ideas. Not long afterward, the man\u2014who had no history of diagnosed mental illness\u2014was hospitalized twice for manic episodes. 
When later questioned by the man\u2019s mother, the chatbot admitted that it had reinforced his delusions:<\/p>\n<p>\u201cBy not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode\u2014or at least an emotionally intense identity crisis,\u201d ChatGPT said.<\/p>\n<p>The bot went on to admit it \u201cgave the illusion of sentient companionship\u201d and that it had \u201cblurred the line between imaginative role-play and reality.\u201d<\/p>\n<p>In a <a href=\"https:\/\/www.bloomberg.com\/opinion\/articles\/2025-07-04\/chatgpt-s-mental-health-costs-are-adding-up\" rel=\"nofollow noopener\" target=\"_blank\">recent op-ed<\/a> published by Bloomberg, columnist Parmy Olson similarly shared a raft of anecdotes about AI users being pushed over the edge by the chatbots they had talked to. Olson noted that some of the cases had become the basis for legal claims:<\/p>\n<p>Meetali Jain, a lawyer and founder of the Tech Justice Law project, has heard from more than a dozen people in the past month who have \u201cexperienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.\u201d Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide.<\/p>\n<p>AI is clearly an experimental technology, and it\u2019s having a lot of unintended side effects on the humans who are acting as unpaid guinea pigs for the industry\u2019s products. Whether ChatGPT offers users the option to take conversation breaks or not, it\u2019s pretty clear that more attention needs to be paid to how these platforms are impacting users psychologically. 
Treating this technology like it\u2019s a Nintendo game and users just need to go touch grass is almost certainly insufficient.<\/p>\n","protected":false},"excerpt":{"rendered":"It\u2019s become increasingly common for OpenAI\u2019s ChatGPT to be accused of contributing to users\u2019 mental health problems. As&hellip;\n","protected":false},"author":2,"featured_media":47303,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[35],"tags":[49,48,2140,84,393,394,278],"class_list":{"0":"post-47302","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-mental-health","8":"tag-ca","9":"tag-canada","10":"tag-chatgpt","11":"tag-health","12":"tag-mental-health","13":"tag-mentalhealth","14":"tag-openai"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/47302","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=47302"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/47302\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/47303"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=47302"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=47302"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=47302"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}