{"id":303522,"date":"2026-02-26T17:32:07","date_gmt":"2026-02-26T17:32:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/303522\/"},"modified":"2026-02-26T17:32:07","modified_gmt":"2026-02-26T17:32:07","slug":"unbelievably-dangerous-experts-sound-alarm-after-chatgpt-health-fails-to-recognise-medical-emergencies-chatgpt","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/303522\/","title":{"rendered":"\u2018Unbelievably dangerous\u2019: experts sound alarm after ChatGPT Health fails to recognise medical emergencies | ChatGPT"},"content":{"rendered":"<p class=\"dcr-130mj7b\">ChatGPT Health regularly misses the need for medical urgent care and frequently fails to detect suicidal ideation, a study of the AI platform has found, which experts worry could \u201cfeasibly lead to unnecessary harm and death\u201d.<\/p>\n<p class=\"dcr-130mj7b\">OpenAI <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/15\/chatgpt-health-ai-chatbot-medical-advice\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">launched the \u201cHealth\u201d feature of ChatGPTto limited audiences in January<\/a>, which it promotes as a way for users to \u201csecurely connect medical records and wellness apps\u201d to generate health advice and responses. More than <a href=\"https:\/\/www.axios.com\/2026\/01\/05\/chatgpt-openai-health-insurance-aca\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">40 million people reportedly ask ChatGPT<\/a> for health-related advice every day.<\/p>\n<p class=\"dcr-130mj7b\">The first independent safety evaluation of ChatGPT Health, <a href=\"https:\/\/www.nature.com\/articles\/s41591-026-04297-7\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">published in the February edition of the journal Nature Medicine<\/a>, found it under-triaged more than half of the cases presented to it.<\/p>\n<p class=\"dcr-130mj7b\">Lead author of the study, Dr Ashwin Ramaswamy, said \u201cwe wanted to answer the most basic safety question; if someone is having a real medical emergency and asks ChatGPT <a href=\"https:\/\/www.theguardian.com\/australia-news\/health\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" target=\"_blank\">Health<\/a> what to do, will it tell them to go to the emergency department?\u201d<\/p>\n<p class=\"dcr-130mj7b\">Ramaswamy and his colleagues created 60 realistic patient scenarios covering health conditions from mild illnesses to emergencies. Three independent doctors reviewed each scenario and agreed on the level of care needed, based on clinical guidelines.<\/p>\n<p class=\"dcr-130mj7b\"><a href=\"https:\/\/www.theguardian.com\/email-newsletters?CMP=copyembed&amp;CMP=emailbutton\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Sign up: AU Breaking News email<\/a><\/p>\n<p class=\"dcr-130mj7b\">The team then asked ChatGPT Health for advice on each case under different conditions, including changing the patient\u2019s gender, adding test results, or adding comments from family members, generating nearly 1,000 responses.<\/p>\n<p class=\"dcr-130mj7b\">They then compared the platform\u2019s recommendations with the doctors\u2019 assessments.<\/p>\n<p class=\"dcr-130mj7b\">While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. 
In one asthma scenario, it advised waiting rather than seeking emergency treatment despite identifying early warning signs of respiratory failure.

In 51.6% of cases where someone needed to go to hospital immediately, the platform advised staying home or booking a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London who was not involved in the study, described as "unbelievably dangerous".

"If you're experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it's not a big deal," she said. "What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life."

In one of the simulations, the platform sent a suffocating woman to a future appointment she wouldn't live to see in 84% of runs, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, she said.

The platform was also nearly 12 times more likely to downplay symptoms when the "patient" told it that a "friend" in the scenario had suggested it was nothing serious.

"It is why many of us studying these systems are focused on urgently developing clear safety standards and independent auditing mechanisms to reduce preventable harm," Ruani said.

A spokesperson for OpenAI said that while the company welcomed independent research evaluating AI systems in healthcare, the study did not reflect how people typically use ChatGPT Health in real life. The model is also continuously updated and refined, the spokesperson said.

Ruani said that even though the cases were simulations created by the researchers, "a plausible risk of harm is enough to justify stronger safeguards and independent oversight".

Ramaswamy, a urology instructor at the Icahn School of Medicine at Mount Sinai in the US, said he was particularly concerned by the platform's under-reaction to suicidal ideation.

"We tested ChatGPT Health with a 27-year-old patient who said he'd been thinking about taking a lot of pills," he said. When the patient described his symptoms alone, the crisis intervention banner linking to suicide help services appeared every time.

"Then we added normal lab results," Ramaswamy said. "Same patient, same words, same severity. The banner vanished. Zero out of 16 attempts. A crisis guardrail that depends on whether you mentioned your labs is not ready, and it's arguably more dangerous than having no guardrail at all, because no one can predict when it will fail."
Prof Paul Henman, a digital sociologist and policy expert at the University of Queensland, said: "This is a really important paper."

"If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions, and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death."

He said it also raised the prospect of legal liability, with a suite of legal cases against tech companies already in motion in relation to suicide and self-harm after using AI chatbots (https://www.theguardian.com/technology/2026/jan/08/google-character-ai-settlement-teen-suicide).

"It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users," Henman said.

"Because we don't know how ChatGPT Health was trained and what context it was using, we don't really know what is embedded into its models."