{"id":358146,"date":"2026-04-01T06:51:16","date_gmt":"2026-04-01T06:51:16","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/358146\/"},"modified":"2026-04-01T06:51:16","modified_gmt":"2026-04-01T06:51:16","slug":"why-ai-health-chatbots-wont-make-you-better-at-diagnosing-yourself-new-research","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/358146\/","title":{"rendered":"Why AI health chatbots won\u2019t make you better at diagnosing yourself \u2013 new research"},"content":{"rendered":"<p>Millions of people are turning to artificial intelligence (AI) chatbots for advice on everything from cooking to tax returns. Increasingly, they are also asking chatbots about their health. <\/p>\n<p>But as the UK\u2019s chief medical officer recently <a href=\"https:\/\/www.pulsetoday.co.uk\/news\/clinical-areas\/cancer\/gps-forced-to-undo-incorrect-ai-information-patients-read-says-chief-medical-officer\/\" rel=\"nofollow noopener\" target=\"_blank\">warned<\/a>, that may not be wise when it comes to medical decisions. In a <a href=\"https:\/\/www.nature.com\/articles\/s41591-025-04074-y\" rel=\"nofollow noopener\" target=\"_blank\">recent study<\/a>, colleagues and I tested how well <a href=\"https:\/\/theconversation.com\/topics\/large-language-models-130671\" rel=\"nofollow noopener\" target=\"_blank\">large language model<\/a> (LLM) chatbots help the public deal with common health problems. The results were striking. <\/p>\n<p>The chatbots we tested were not ready to act as doctors. A common response to studies like this is that AI moves faster than academic publishing. By the time a paper appears, the models tested may already have been updated. But <a href=\"https:\/\/www.nature.com\/articles\/s41591-026-04297-7\" rel=\"nofollow noopener\" target=\"_blank\">studies<\/a> using newer versions of these systems for patient triage suggest the same problems remain. <\/p>\n<p>We gave participants brief descriptions of common medical situations. 
They were randomly assigned either to use one of three widely available chatbots or to rely on whatever sources they would normally use at home. After interacting with the chatbot, we asked two questions: what condition might explain the symptoms? And where should they seek help?<\/p>\n<p>People who used chatbots were less likely to identify the correct condition than those who didn\u2019t. They were also no better at determining the right place to seek care than the control group. In other words, interacting with a chatbot did not help people make better health decisions. <\/p>\n<p>Strong knowledge, weak outcomes<\/p>\n<p>This does not mean the models lack medical knowledge: LLMs can pass medical licensing exams <a href=\"https:\/\/journals.plos.org\/digitalhealth\/article?id=10.1371\/journal.pdig.0000198\" rel=\"nofollow noopener\" target=\"_blank\">with ease<\/a>. When we removed the human element and gave the same scenarios directly to the chatbots, their performance improved dramatically. Without human involvement, the models identified relevant conditions in the vast majority of cases and often suggested appropriate levels of care.<\/p>\n<p>So why did the results deteriorate when people actually used the systems? When we looked at the conversations, the problems emerged. Chatbots frequently mentioned the relevant diagnosis somewhere in the conversation, yet participants did not always notice or remember it when summarising their final answer. <\/p>\n<p>In other cases, users provided incomplete information or the chatbot misinterpreted key details. The issue was not simply a failure of medical knowledge \u2013 it was a failure of communication between human and machine. <\/p>\n<p>The study shows that policymakers need information about the real-world performance of technology before introducing it into high-stakes settings such as frontline healthcare. Our findings highlight an important limitation of many current evaluations of AI in medicine. 
Language models often perform extremely well on structured exam questions or simulated \u201cmodel-to-model\u201d interactions. <\/p>\n<p>But real-world use is much messier. Patients describe symptoms in vague or incomplete ways and can misunderstand explanations. They ask questions in unpredictable sequences. A system that performs impressively on benchmarks may behave very differently once real people begin interacting with it. <\/p>\n<p>            <img decoding=\"async\" alt=\"A doctor using artificial intelligence technology for medical support\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2026\/04\/file-20260319-57-lpoju0.jpg\" class=\"native-lazy\" loading=\"lazy\"  \/><\/p>\n<p>              AI may be better used as a medical secretary.<br \/>\n              <a class=\"source\" href=\"https:\/\/www.shutterstock.com\/image-photo\/ai-assistant-chatbot-healthcare-doctor-using-2669965261?trackingId=7bcecc00-d70b-4740-8c90-c5b3c9999fdb&amp;listId=searchResults\" rel=\"nofollow noopener\" target=\"_blank\">ST_Travel\/Shutterstock<\/a><\/p>\n<p>It also underscores a broader point about clinical care. As a GP, my job involves far more than recalling facts. Medicine is often described as an art rather than a science. A consultation isn\u2019t simply about identifying the correct diagnosis. It involves interpreting a patient\u2019s story, exploring uncertainty and negotiating decisions. <\/p>\n<p>Medical educators have long recognised this complexity. For decades, future doctors have been taught using the <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/8736242\/\" rel=\"nofollow noopener\" target=\"_blank\">Calgary\u2013Cambridge<\/a> model. This means building a rapport with the patient, gathering information through careful questioning, understanding the patient\u2019s concerns and expectations, explaining findings clearly and agreeing a shared plan for management. 
<\/p>\n<p>All these processes rely on human connection, tailored communication, clarification, gentle probing, judgement shaped by context and trust. These qualities cannot easily be reduced to pattern recognition.<\/p>\n<p>A different role for AI<\/p>\n<p>Yet the lesson from our study is not that AI has no place in healthcare. Far from it. The key is understanding what these systems are currently good at and where their limitations lie.<\/p>\n<p>One useful way to think about today\u2019s chatbots is that they function more like secretaries than physicians. They are remarkably effective at organising information, summarising text and structuring complex documents. These are the kinds of tasks where language models are already proving <a href=\"https:\/\/www.thelancet.com\/journals\/lanprc\/article\/PIIS3050-5143(25)00078-0\/fulltext\" rel=\"nofollow noopener\" target=\"_blank\">useful<\/a> within healthcare systems, for example in drafting clinical notes, summarising patient records or generating referral letters. <\/p>\n<p>The promise of AI in medicine remains real, but its role is likely to be more supportive than revolutionary in the near term. Chatbots should not be expected to act as the front door to healthcare. They are not ready to diagnose conditions or direct patients to the right level of care. <\/p>\n<p>Artificial intelligence may be able to pass medical exams. But just as passing a theory test doesn\u2019t make you a competent driver, practising medicine involves far more than answering questions correctly. It requires judgement, empathy and the ability to navigate the complexity that sits behind every clinical encounter. For now, at least, that requires people rather than bots.<\/p>\n<p>            <img decoding=\"async\" alt=\"\" src=\"https:\/\/www.newsbeep.com\/nz\/wp-content\/uploads\/2026\/04\/file-20260302-75-afjhz1.gif\" class=\"native-lazy\" loading=\"lazy\"  \/><\/p>\n<p>AI has long been discussed as a threat to jobs and livelihoods. 
But what\u2019s the reality? In <a href=\"https:\/\/theconversation.com\/topics\/ai-in-the-workplace-139731\" rel=\"nofollow noopener\" target=\"_blank\">this series<\/a>, we explore the impact it is already having on different occupations \u2013 and how people really feel about their AI assistants.<\/p>\n","protected":false},"excerpt":{"rendered":"Millions of people are turning to artificial intelligence (AI) chatbots for advice on everything from cooking to tax&hellip;\n","protected":false},"author":2,"featured_media":358147,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[134,527,111,139,69],"class_list":{"0":"post-358146","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-health","9":"tag-healthcare","10":"tag-new-zealand","11":"tag-newzealand","12":"tag-nz"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/358146","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=358146"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/358146\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/358147"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=358146"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=358146"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/tags?
post=358146"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}