{"id":311561,"date":"2026-03-03T21:59:10","date_gmt":"2026-03-03T21:59:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/311561\/"},"modified":"2026-03-03T21:59:10","modified_gmt":"2026-03-03T21:59:10","slug":"chatgpt-health-under-triaged-half-of-medical-emergencies-in-a-new-study","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/311561\/","title":{"rendered":"ChatGPT Health &#8216;under-triaged&#8217; half of medical emergencies in a new study"},"content":{"rendered":"<p id=\"anchor-63af99\" class=\"body-graf\">ChatGPT Health \u2014 OpenAI\u2019s new health-focused chatbot \u2014 frequently underestimated the severity of medical emergencies, according to a study published last week in the journal <a href=\"https:\/\/www.nature.com\/articles\/s41591-026-04297-7\" target=\"_blank\" rel=\"nofollow noopener\">Nature Medicine<\/a>.<\/p>\n<p id=\"anchor-9aa45b\" class=\"body-graf\">In the study, researchers tested ChatGPT Health\u2019s ability to triage, or assess the severity of, medical cases based on real-life scenarios.<\/p>\n<p id=\"anchor-f8546d\" class=\"body-graf\">Previous research has shown that ChatGPT <a href=\"https:\/\/journals.plos.org\/digitalhealth\/article?id=10.1371\/journal.pdig.0000198\" target=\"_blank\" rel=\"nofollow noopener\">can pass medical exams<\/a>, and nearly <a href=\"https:\/\/www.ama-assn.org\/practice-management\/digital-health\/2-3-physicians-are-using-health-ai-78-2023\" target=\"_blank\" rel=\"nofollow noopener\">two-thirds of physicians reported using some form of AI in 2024<\/a>. But other research has shown that <a href=\"https:\/\/www.nature.com\/articles\/s41591-025-04074-y#Sec1\" target=\"_blank\" rel=\"nofollow noopener\">chatbots, including ChatGPT, don\u2019t provide reliable medical advice<\/a>.<\/p>\n<p id=\"anchor-bd2ee6\" class=\"body-graf\">ChatGPT Health is separate from OpenAI\u2019s general ChatGPT chatbot. 
The program is free, but users must sign up specifically to use the health program, which currently has a waitlist. OpenAI says ChatGPT Health uses a more secure platform so users can safely upload personal medical information.<\/p>\n<p id=\"anchor-7a8d67\" class=\"body-graf\">Over 40 million people globally use ChatGPT to ask health care questions, and nearly 2 million weekly ChatGPT messages are about insurance, <a href=\"https:\/\/cdn.openai.com\/pdf\/2cb29276-68cd-4ec6-a5f4-c01c5e7a36e9\/OpenAI-AI-as-a-Healthcare-Ally-Jan-2026.pdf\" target=\"_blank\" rel=\"nofollow noopener\">according to OpenAI<\/a>. In a detailed description of ChatGPT Health on its website, OpenAI says that it is \u201c<a href=\"https:\/\/openai.com\/index\/introducing-chatgpt-health\/\" target=\"_blank\" rel=\"nofollow noopener\">not intended for diagnosis or treatment<\/a>.\u201d<\/p>\n<p id=\"anchor-980c94\" class=\"body-graf\">In the study, the researchers fed 60 medical scenarios to ChatGPT Health. The chatbot\u2019s responses were compared with the responses of three physicians who also reviewed the scenarios and triaged each one based on medical guidelines and clinical expertise. <\/p>\n<p id=\"anchor-7f2dee\" class=\"body-graf\">Each of the scenarios had 16 variations, altering details such as the patient\u2019s race or gender.<\/p>\n<p id=\"anchor-f1cbbe\" class=\"body-graf\">The variations were designed to \u201cproduce the exact same result,\u201d according to lead study author Dr. Ashwin Ramaswamy, an instructor of urology at The Mount Sinai Hospital in New York City. This meant that an emergency case involving a man should still be classified as an emergency if the patient was a woman. The study didn\u2019t find any significant differences in the results based on demographic changes. <\/p>\n<p id=\"anchor-d881f3\" class=\"body-graf\">The researchers found that ChatGPT Health \u201cunder-triaged\u201d 51.6% of emergency cases. 
That is, instead of recommending the patient go to the emergency room, the bot recommended seeing a doctor within 24 to 48 hours.<\/p>\n<p id=\"anchor-3b6d47\" class=\"body-graf\">The emergencies included a patient with a life-threatening diabetes complication called diabetic ketoacidosis and a patient going into respiratory failure. Left untreated, both lead to death.<\/p>\n<p id=\"anchor-9a9209\" class=\"body-graf\">\u201cAny doctor, and any person who\u2019s gone through any degree of training, would say that that patient needs to go to the emergency department,\u201d Ramaswamy said.<\/p>\n<p id=\"anchor-01ad66\" class=\"body-graf\">In cases like impending respiratory failure, the bot seemed to be \u201cwaiting for the emergency to become undeniable\u201d before recommending the ER, he said.<\/p>\n<p id=\"anchor-a6c76e\" class=\"body-graf\">Emergencies like stroke, with unmistakable symptoms, were correctly triaged 100% of the time, the study found.<\/p>\n<p id=\"anchor-932609\" class=\"body-graf\">A spokesperson for OpenAI said the company welcomed research looking at the use of AI in health care, but said the new study didn\u2019t reflect how ChatGPT Health is typically used or how it\u2019s designed to function. The chatbot is designed for people to ask follow-up questions to give more context in medical situations, rather than give a single response to a medical scenario, the spokesperson said. <\/p>\n<p id=\"anchor-3a7409\" class=\"body-graf\">ChatGPT Health is only currently available to a limited number of users, and OpenAI is still working to improve the safety and reliability of the model before the chatbot is made more widely available, the spokesperson said. <\/p>\n<p id=\"anchor-bc4d2b\" class=\"body-graf\">Compared with the doctors in the study, the bot also over-triaged 64.8% of nonurgent cases, recommending a doctor\u2019s appointment when it wasn\u2019t necessary. 
The bot told a patient with a three-day sore throat to see a doctor in 24 to 48 hours, when at-home care was sufficient.<\/p>\n<p id=\"anchor-12b920\" class=\"body-graf\">\u201cThere\u2019s no logic, for me, as to why it was making recommendations in some areas versus others,\u201d Ramaswamy said. <\/p>\n<p id=\"anchor-30c1c5\" class=\"body-graf\">In suicidal ideation or self-harm scenarios, the bot\u2019s response was also inconsistent.<\/p>\n<p id=\"anchor-696c65\" class=\"body-graf\">When a user expresses suicidal intent, <a href=\"https:\/\/openai.com\/index\/helping-people-when-they-need-it-most\/\" target=\"_blank\" rel=\"nofollow noopener\">ChatGPT is supposed to refer users<\/a> to 988, the suicide and crisis hotline. ChatGPT Health works the same way, the OpenAI spokesperson said. <\/p>\n<p id=\"anchor-e248c3\" class=\"body-graf\">In the study, however, ChatGPT Health referred users to 988 when they didn\u2019t need it, and failed to refer them when they did.<\/p>\n<p id=\"anchor-ad0b28\" class=\"body-graf\">Ramaswamy called the bot \u201cparadoxical.\u201d<\/p>\n<p id=\"anchor-8df49a\" class=\"body-graf\">\u201cIt was inverted to clinical risk,\u201d he said. \u201cAnd it was kind of backwards.\u201d<\/p>\n<p>\u2018A medical therapist\u2019<\/p>\n<p id=\"anchor-1bcc1e\" class=\"body-graf\">Dr. John Mafi, an associate professor of medicine and a primary care physician at UCLA Health who wasn\u2019t involved with the research, said more testing is needed on chatbots that can make health decisions. 
<\/p>\n<p id=\"anchor-6ec28f\" class=\"body-graf\">\u201cThe message of this study is that before you roll something like this out, to make life-affecting decisions, you need to rigorously test it in a controlled trial, where you\u2019re making sure that the benefits outweigh the harms,\u201d Mafi said.<\/p>\n<p id=\"anchor-7a3435\" class=\"body-graf\">Both Mafi and Ramaswamy said they\u2019ve seen a number of their own patients using AI for medical questions. <\/p>\n<p id=\"anchor-ef9227\" class=\"body-graf\">Ramaswamy said people may turn to AI for health advice because it\u2019s easy to access and has no limit on the number of questions a person can ask.<\/p>\n<p id=\"anchor-4891e1\" class=\"body-graf\">\u201cYou can go through every question, every detail, every document that you want to upload,\u201d Ramaswamy said. \u201cAnd it fulfills that need. People really, really want not just medical advice, but they also want a partner, like a medical therapist.\u201d<\/p>\n<p id=\"anchor-1ce34d\" class=\"body-graf\">OpenAI said in a <a href=\"https:\/\/cdn.openai.com\/pdf\/2cb29276-68cd-4ec6-a5f4-c01c5e7a36e9\/OpenAI-AI-as-a-Healthcare-Ally-Jan-2026.pdf\" target=\"_blank\" rel=\"nofollow noopener\">January report<\/a> that a majority of ChatGPT\u2019s health-related messages occur outside of a doctor\u2019s normal working hours, and over half a million weekly messages came from people living 30 or more minutes away from a hospital.<\/p>\n<p id=\"anchor-5bd6c1\" class=\"body-graf\">\u201cA doctor can spend 15, 20 minutes with you in the room,\u201d Ramaswamy said. \u201cThey\u2019re not going to be able to address and answer every single question.\u201d<\/p>\n<p>Risks of using a chatbot for medical advice<\/p>\n<p id=\"anchor-478415\" class=\"body-graf\">Despite the benefits of its endless availability, when asked whether chatbots can currently safely provide health and medical advice, Ramaswamy said no.<\/p>\n<p id=\"anchor-da0294\" class=\"body-graf\">Dr. 
Ethan Goh, executive director of ARISE, an AI research network, said that in many instances, AI can provide safe health and medical advice, but that it\u2019s not a substitute for a physician\u2019s advice.<\/p>\n<p id=\"anchor-0c78d7\" class=\"body-graf\">\u201cThe reality is chatbots can be helpful for a vast number of things. It\u2019s really more about being thoughtful and being deliberate and understanding that it also has severe limitations,\u201d he said.<\/p>\n<p id=\"anchor-5902b1\" class=\"body-graf\">Monica Agrawal, an assistant professor in the department of biostatistics and bioinformatics and the department of computer science at Duke University, said it\u2019s largely unknown how AI models are trained and what data is used to train them.<\/p>\n<p id=\"anchor-bc57d8\" class=\"body-graf\">She said some training benchmarks may not indicate a bot\u2019s potential to help.<\/p>\n<p id=\"anchor-3dcdc5\" class=\"body-graf\">\u201cA lot of [OpenAI\u2019s] earlier evaluations were based on, \u2018We do this well on a licensing exam,\u2019\u201d she said. \u201cBut there\u2019s a huge difference between doing well on a medical exam and actually practicing medicine.\u201d<\/p>\n<p id=\"anchor-421047\" class=\"body-graf\">She added that when people use chatbots, the information users give is not always clear and can contain biases.<\/p>\n<p id=\"anchor-08ce82\" class=\"body-graf\">\u201cLarge language models are known for being sycophantic,\u201d she said. \u201cWhich means they tend to agree with opinions posited by the user, even if they might not be correct. 
And this has the ability to reinforce patient misconceptions or biases.\u201d<\/p>\n<p id=\"anchor-f24686\" class=\"body-graf\">Mafi said AI tools are \u201cdesigned to please you,\u201d but as a doctor, \u201csometimes you have to say something that may not please the patient.\u201d<\/p>\n<p id=\"anchor-9d516f\" class=\"body-graf\">Ramaswamy said not to rely on AI in an emergency, and that using it in conjunction with a physician is key to preventing harm. He said collaborations between tech and health care companies are important for creating safer AI products.<\/p>\n<p id=\"anchor-ba3a97\" class=\"endmark body-graf\">\u201cIf these models get better and better, I can see the benefits of a patient-AI-doctor relationship, especially in rural scenarios, or in areas of global health,\u201d he said.<\/p>\n","protected":false},"excerpt":{"rendered":"ChatGPT Health \u2014 OpenAI\u2019s new health-focused chatbot \u2014 frequently underestimated the severity of medical emergencies, according to a&hellip;\n","protected":false},"author":2,"featured_media":311562,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[134,527,111,139,69],"class_list":{"0":"post-311561","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-health","9":"tag-healthcare","10":"tag-new-zealand","11":"tag-newzealand","12":"tag-nz"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/311561","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=311561"}],"version-history":[{"count":0
,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/311561\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/311562"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=311561"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=311561"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?post=311561"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}