{"id":245173,"date":"2025-10-28T00:10:06","date_gmt":"2025-10-28T00:10:06","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/245173\/"},"modified":"2025-10-28T00:10:06","modified_gmt":"2025-10-28T00:10:06","slug":"more-than-a-million-people-every-week-show-suicidal-intent-when-chatting-with-chatgpt-openai-estimates-technology","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/245173\/","title":{"rendered":"More than a million people every week show suicidal intent when chatting with ChatGPT, OpenAI estimates | Technology"},"content":{"rendered":"<p class=\"dcr-130mj7b\">More than a million <a href=\"https:\/\/www.theguardian.com\/technology\/chatgpt\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a> users each week send messages that include \u201cexplicit indicators of potential suicidal planning or intent\u201d, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the scale at which AI can exacerbate mental health issues.<\/p>\n<p class=\"dcr-130mj7b\">In addition to its estimates on suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week \u2013 about 560,000 of its touted <a href=\"https:\/\/techcrunch.com\/2025\/10\/06\/sam-altman-says-chatgpt-has-hit-800m-weekly-active-users\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">800m weekly users<\/a> \u2013 show \u201cpossible signs of mental health emergencies related to psychosis or mania\u201d. 
The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.<\/p>\n<p class=\"dcr-130mj7b\">As OpenAI releases data on mental health issues related to its marquee product, the company is facing increased scrutiny <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/oct\/22\/openai-chatgpt-lawsuit\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">following a highly publicized lawsuit<\/a> from the family of a teenage boy who died by suicide after extensive engagement with ChatGPT. The Federal Trade Commission last month additionally <a href=\"https:\/\/www.ftc.gov\/news-events\/news\/press-releases\/2025\/09\/ftc-launches-inquiry-ai-chatbots-acting-companions\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">launched a broad investigation<\/a> into companies that create AI chatbots, including OpenAI, to find how they measure negative impacts on children and teens.<\/p>\n<p class=\"dcr-130mj7b\">OpenAI claimed in its post that its recent <a href=\"https:\/\/www.theguardian.com\/technology\/2025\/aug\/07\/openai-chatgpt-upgrade-big-step-forward-human-jobs-gpt-5\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">GPT-5 update<\/a> reduced the number of undesirable behaviors from its product and improved user safety in a model evaluation involving more than 1,000 self-harm and suicide conversations. The company did not immediately return a request for comment.<\/p>\n<p class=\"dcr-130mj7b\">\u201cOur new automated evaluations score the new GPT\u20115 model at 91% compliant with our desired behaviors, compared to 77% for the previous GPT\u20115 model,\u201d the company\u2019s post reads.<\/p>\n<p class=\"dcr-130mj7b\">OpenAI stated that GPT-5 expanded access to crisis hotlines and added reminders for users to take breaks during long sessions. 
To improve the model, the company said it enlisted 170 clinicians from its Global Physician Network of health care experts to assist its research over recent months, which included rating the safety of its model\u2019s responses and helping write the chatbot\u2019s answers to mental health-related questions.<\/p>\n<p class=\"dcr-130mj7b\">\u201cAs part of this work, psychiatrists and psychologists reviewed more than 1,800 model responses involving serious mental health situations and compared responses from the new GPT\u20115 chat model to previous models,\u201d OpenAI said. The company\u2019s definition of \u201cdesirable\u201d involved determining whether a group of its experts reached the same conclusion about what would be an appropriate response in certain situations.<\/p>\n<p class=\"dcr-130mj7b\">AI researchers and public health advocates have long been wary of chatbots\u2019 propensity to affirm users\u2019 decisions or delusions regardless of whether they may be harmful, an issue known as sycophancy. 
Mental health experts have <a href=\"https:\/\/www.theguardian.com\/society\/2025\/aug\/30\/therapists-warn-ai-chatbots-mental-health-support\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">also been concerned<\/a> about people using AI chatbots for psychological support and warned how it could harm vulnerable users.<\/p>\n<p class=\"dcr-130mj7b\">The language in OpenAI\u2019s post distances the company from any potential causal links between its product and the mental health crises that its users are experiencing.<\/p>\n<p class=\"dcr-130mj7b\">\u201cMental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations,\u201d OpenAI\u2019s post stated.<\/p>\n<p class=\"dcr-130mj7b\">OpenAI\u2019s CEO <a href=\"https:\/\/www.theguardian.com\/technology\/sam-altman\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Sam Altman<\/a> earlier this month claimed in a post on X that the company had made advancements in treating mental health issues, announcing that OpenAI would ease restrictions and soon begin to allow adults to create erotic content.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWe made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful\/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,\u201d Altman posted. 
\u201cNow that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"More than a million ChatGPT users each week send messages that include \u201cexplicit indicators of potential suicidal planning&hellip;\n","protected":false},"author":2,"featured_media":245174,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-245173","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/245173","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=245173"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/245173\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/245174"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=245173"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=245173"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=245173"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}