{"id":359350,"date":"2025-12-20T00:19:21","date_gmt":"2025-12-20T00:19:21","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/359350\/"},"modified":"2025-12-20T00:19:21","modified_gmt":"2025-12-20T00:19:21","slug":"will-artificial-intelligence-transform-health-care-for-the-better","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/359350\/","title":{"rendered":"Will Artificial Intelligence Transform Health Care for the Better?"},"content":{"rendered":"<p class=\"has-drop-cap\">Diego Martinez made a diving catch for the gently descending softball. At 48, he was getting old for such theatrics, but he enjoyed showing off for the small crowd of friends and family members. As he got up from the turf, Martinez felt a pain in his back. A pulled muscle, he thought. But when he sat on the bench, it got worse. Soon he was sweating and breathing hard. \u201cYou don\u2019t look right,\u201d his wife said.<\/p>\n<p>By the time the ambulance arrived, Martinez was having trouble moving his legs or even speaking clearly. As they loaded him into the van, the EMTs assumed he was having a stroke, or perhaps a heart attack. Here\u2019s where Martinez caught his first break. His city\u2019s emergency medical service had recently rolled out an AI-assisted \u201csmart ambulance\u201d system that helped the EMTs improve their pre-hospital triage protocol. Data in the cloud-based system detailed what resources were available at each nearby hospital. Within moments, it recommended a trauma facility suited to Martinez\u2019s condition and planned the fastest route. By the time the ambulance pulled into the ER bay, the medical team already had his ECG and other vital data downloaded. 
Within minutes, they were wheeling him to radiology.<\/p>\n<p>The hospital\u2019s radiologists had seen thousands of similar cases. But they wouldn\u2019t be relying on their experience alone. For every type of CT, MRI, or other scan available, the hospital had an AI software package designed to search the images for subtle patterns that even the most seasoned radiologist might miss. Whatever was wrong with Diego Martinez, this combination of experienced physicians and the massive power of machine-learning AI gave him a fighting chance.<\/p>\n<p>The scenario above is invented but based on technology already in use or arriving soon in most major hospitals. After decades of painstaking research and development, the U.S. health-care system is embracing artificial intelligence at a dizzying rate. Ten years ago, roughly 40 clinical AI systems had been approved under the FDA\u2019s AI-as-medical-device protocol. Today, more than <a href=\"https:\/\/www.fda.gov\/medical-devices\/software-medical-device-samd\/artificial-intelligence-enabled-medical-devices\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">1,200 AI applications<\/a> are FDA-approved, most designed to help doctors read radiological scans. Other systems predict which ER patients face an elevated risk of <a href=\"https:\/\/health.ucsd.edu\/news\/press-releases\/2024-01-23-study-ai-surveillance-tool-successfully-helps-to-predict-sepsis-saves-lives\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">sepsis<\/a> or other complications during their hospital stay. Machine-learning programs also help hospitals manage staffing levels, improve workflows, and track supplies. 
In addition, an uncounted number of nurses, doctors, and hospital managers rely on ChatGPT and other large language model (LLM) platforms to help keep records, compose e-mails, and handle administrative tasks. Some hospitals are even experimenting with AI chatbots that advise doctors on diagnoses and treatments.<\/p>\n<p>In his book Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (2019), cardiologist and digital-medicine expert Eric Topol described AI as \u201ca beacon of hope\u201d that could \u201cautomate mundane tasks, reduce human error, and provide support in clinical decision-making, thereby streamlining the entire process of patient care.\u201d Many AI critics fear that the technology will depersonalize interactions now handled by humans. But, Topol argued, by freeing doctors from endless digital paperwork, AI could help them give more attention to patients. AI\u2019s greatest promise, he wrote, \u201cis the opportunity to restore the precious and time-honored connection and trust\u2014the human touch\u2014between patients and doctors.\u201d<\/p>\n<p>That was an ambitious agenda for a technology that was\u2014then and now\u2014regarded with trepidation by countless Americans, including many health-care workers. In the intervening years, Topol himself has often drawn attention to the risks and limitations of medical AI applications. But when I asked him whether he remains upbeat about AI\u2019s potential, he replied that he is \u201cmore optimistic\u201d today than in 2019.<\/p>\n<p>The AI-driven transformation of health care is in its early stages, so precise estimates of its benefits are hard to come by. But researchers predict significant improvements in cancer screening, cardiac care, and other branches of medicine. In theory, more efficient diagnoses and treatments should also bring down costs. 
A <a href=\"https:\/\/www.nber.org\/papers\/w30857\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">2023 study<\/a> from McKinsey &amp; Co. and the National Bureau of Economic Research estimated that widespread adoption of AI could lead to reductions of 5 percent to 10 percent in annual U.S. health-care spending.<\/p>\n<p>Nonetheless, the benefits that AI could bring to American health care are far from assured. For one thing, the AI revolution writ large may be hitting a speed bump. AI pioneer <a href=\"https:\/\/garymarcus.substack.com\/p\/is-this-the-moment-when-the-generative\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Gary Marcus<\/a>, for example, has long argued that extravagant predictions of AI dominance are overblown. The latest LLMs from Google, OpenAI, and other companies require exponentially larger investments in infrastructure and power consumption to yield smaller increments in improved performance. In its August debut, OpenAI\u2019s hotly anticipated ChatGPT-5 \u201clanded with a dull thud,\u201d the <a href=\"https:\/\/futurism.com\/gpt-5-disaster\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">tech press<\/a> concluded. 
Even the newest LLMs show an alarming tendency to <a href=\"https:\/\/www.sify.com\/ai-analytics\/the-hilarious-and-horrifying-hallucinations-of-ai\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">hallucinate false information<\/a>, obsequiously endorse users\u2019 <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">paranoid theories<\/a>, and slide into dark, <a href=\"https:\/\/www.politico.com\/news\/magazine\/2025\/07\/10\/musk-grok-hitler-ai-00447055\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">edgelord rhetoric<\/a>.<\/p>\n<p>The American public also appears to be growing more pessimistic about AI: in <a href=\"https:\/\/today.yougov.com\/technology\/articles\/51803-americans-increasingly-skeptical-about-ai-artificial-intelligence-effects-poll\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">a poll this year<\/a>, only 30 percent of respondents said that they thought the technology would have a positive effect on society, while 40 percent expect a negative impact. Colorado and other states have passed or proposed laws aimed at preventing AI\u2019s supposed \u201c<a href=\"https:\/\/reason.com\/2025\/08\/15\/colorados-ai-law-is-a-cautionary-tale-for-the-nation\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">algorithmic bias<\/a>\u201d against protected groups. A pending <a href=\"https:\/\/telehealth.org\/blog\/california-telehealth-policy-2025-what-to-know-about-sb-503-ai-and-other\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">California bill<\/a> would subject AI developers to annual audits and require health-care providers to police their AI tools for vaguely defined \u201cbiased impacts.\u201d These regulatory regimes threaten to hobble AI startups in response to a threat that, so far, seems best addressed through better AI design and training. 
The European Union\u2019s blanket of AI regulations may help explain why the EU lags so far behind the U.S. and China in AI <a href=\"https:\/\/arapackelaw.com\/patents\/ai-patents-by-country\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">patent applications<\/a>.<\/p>\n<p>Some concerns about AI in health care do merit attention. We don\u2019t want doctors and nurses led astray by LLM hallucinations, for example. But advocates for aggressive AI regulation should recall the Hippocratic admonition: first, do no harm. For now, the benefits of AI tools in medicine appear to outweigh the risks dramatically. Even if the grandiose predictions of future AI capabilities never materialize, today\u2019s real-world AI tools already show the potential to reshape medicine for the better. In many cases, those benefits will prove lifesaving.<\/p>\n<p class=\"has-drop-cap\">Martinez was fading as orderlies wheeled him into radiology. The ER team was leaning toward a diagnosis of myocardial infarction and had already alerted the hospital\u2019s cath lab to prepare for an emergency coronary catheterization. But they needed radiology to confirm the hunch and to rule out a stroke, as well. The radiologist was at her workstation as images loaded on the screen. First came the CT scans of the head. They looked fine. Then the technician injected a contrast dye and began a series of chest scans. The radiologist searched for the telltale signs of coronary artery blockage. But the blood flow to the heart appeared normal. Tricky case, she thought.<\/p>\n<p>As the radiologist looked for clues, a system developed by <a href=\"https:\/\/www.aidoc.com\/solutions\/radiology\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Aidoc<\/a>, a Tel Aviv\u2013based AI company, was also reviewing the images. 
Through a process of <a href=\"https:\/\/www.ibm.com\/think\/topics\/machine-learning\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">machine learning<\/a>, the system had taught itself to recognize obvious, as well as extremely subtle, patterns linked to conditions such as pulmonary embolisms or cancerous lesions. As images flowed into the database, the Aidoc system suddenly pinged an alert. It brought one slide to the top of the queue and highlighted a thin, barely visible line inside the patient\u2019s ascending aorta. The radiologist looked closely and confirmed the system\u2019s preliminary finding. She got on the phone to the ER. \u201cAlert surgery,\u201d she said. \u201cIt\u2019s an aortic dissection, Type A.\u201d<\/p>\n<p>Martinez had gotten another break. An aortic dissection is a partial breakdown of the aorta wall that can rapidly progress to a fatal rupture. Of patients who don\u2019t receive a timely diagnosis and surgery, roughly half will be <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC10871943\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">dead within 24 hours<\/a>. Martinez was prepped for cardiothoracic surgery.<\/p>\n<p>Today, most leading AI systems depend on some version of machine learning (ML) or, more specifically, deep learning. An ML system learning to identify, say, tree species, doesn\u2019t need to be programmed with specific information about bark or leaf types; it simply digests images labeled \u201cwhite oak\u201d or \u201csugar maple\u201d and learns their distinguishing features. (Sometimes, humans step in to \u201creinforce\u201d correct answers and flag the bad ones.) This kind of deep-learning pattern recognition is used in many of today\u2019s medical AI applications, including reading radiological scans, predicting medical issues such as heart attacks, and drug discovery.<\/p>\n<p>But as anyone who has used an AI chatbot knows, the latest LLMs offer much broader capabilities. 
While also rooted in machine learning, LLMs ingest huge quantities of written material, learn the likely word patterns used in various subject areas, and generate original content based on those underlying connections. With their ability to engage in human-like dialogue, these generative AI systems can help doctors and nurses with record-keeping and patient communication, even offering clinical advice.<\/p>\n<p>Each of these broad approaches to medical AI offers distinct strengths and risks. Chad McClennan is president and CEO of the medical imaging company <a href=\"https:\/\/koiosmedical.com\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Koios<\/a>, which helped pioneer the use of ML to diagnose breast and thyroid cancer using ultrasound images. Unlike an LLM chatbot, a task-specific ML system \u201cdoesn\u2019t improvise,\u201d McClennan told me. He also stressed that the Koios system isn\u2019t meant to replace the judgment of the radiologist but instead to offer a kind of virtual second opinion. It also provides a backstop against simple errors. \u201cIt\u2019s like the way spell-check alerts you when you\u2019ve typed \u2018your\u2019 instead of \u2018you\u2019re,\u2019 \u201d McClennan said. In complex cases, Koios software can also augment the physician\u2019s necessarily limited experience, he added, since its training sources include \u201cunique outlier cases that AI never forgets.\u201d<\/p>\n<p>\u201cWe should stop training radiologists right now,\u201d British AI researcher Geoffrey Hinton famously <a href=\"https:\/\/www.youtube.com\/watch?v=2HMPRXstSvQ\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">proclaimed in 2016<\/a>. 
\u201cIt\u2019s just completely obvious that, within five years, deep learning is going to do better than radiologists.\u201d Since then, a number of ML radiology platforms have outperformed human radiologists in controlled tests. But this year, the Nobel Prize\u2013winning scientist walked back his 2016 statement, telling the <a href=\"https:\/\/www.nytimes.com\/2025\/05\/14\/technology\/ai-jobs-radiologists-mayo-clinic.html?unlocked_article_code=1.HE8.Lxyh.vXIfan2VOGRr&amp;smid=nytcore-ios-share&amp;referringSource=articleShare\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">New York Times<\/a> that he had spoken too broadly. Rather than putting physicians out of work, AI was making \u201cradiologists a whole lot more efficient in addition to improving accuracy,\u201d he said. Indeed, in hospitals around the world, systems from companies including Koios and Aidoc aren\u2019t just helping radiologists make better diagnoses; they\u2019re also speeding up the scanning process, creating documentation needed for medical records, and simplifying workflows.<\/p>\n<p>Nonetheless, helping doctors make the most of AI diagnostic tools turns out to be harder than expected. A recent <a href=\"https:\/\/economics.mit.edu\/sites\/default\/files\/2023-07\/agarwal-et-al-diagnostic-ai.pdf\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Harvard\u2013MIT<\/a> study tested how physicians responded to AI analyses of chest X-rays that contradicted their own judgments. In the study, the AI system achieved 92 percent accuracy when working alone. Radiologists working unaided achieved a less impressive 74 percent accuracy rate. 
But when the radiologists combined the AI results with their own judgment, they reached an accuracy level of only 76 percent.<\/p>\n<p>In a New York Times <a href=\"https:\/\/www.nytimes.com\/2025\/02\/02\/opinion\/ai-doctors-medicine.html?unlocked_article_code=1.t04.AeZg.kT0qka6kerAi&amp;smid=url-share\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">op-ed<\/a>, Topol and a coauthor argued that the findings indicate that \u201cright now, simply giving physicians AI tools and expecting automatic improvements doesn\u2019t work.\u201d Research into how physicians actually use AI reveals several pitfalls. Sometimes, as in the Harvard\u2013MIT study, doctors undervalue the AI input and fall back on their own flawed instincts. But on the flip side, Topol told me, there\u2019s the risk of automation bias, which he defines as \u201cthe human tendency to over-rely on AI and to ignore contradictory information, even when it is correct.\u201d Another worry is that doctors will experience \u201cdeskilling\u201d after learning to rely on AI. One study suggested that doctors who used an AI-assisted endoscopy tool for three months became less adept at finding precancerous polyps when performing colonoscopies unaided.<\/p>\n<p>These dilemmas don\u2019t mean that AI won\u2019t dramatically improve radiology and other medical specialties. But they suggest that integrating humans and machines will require a systematic effort. To start, we need \u201cmore and better AI training for physicians,\u201d Topol said.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/12\/A-Mayo-Clinic-radiologist-working-with-an-AI-tool-that-saves-15-to-30-minutes-per-examination.jpg\" alt=\"\" class=\"wp-image-36332\"\/>A Mayo Clinic radiologist working with an AI tool that saves 15 to 30 minutes per examination (Jenn Ackermann\/The New York Times\/Redux)<\/p>\n<p class=\"has-drop-cap\">Martinez woke up in the ICU. 
The rapid AI diagnosis of his aortic dissection had allowed a fast transition to cardiothoracic surgery, likely saving his life. But he wasn\u2019t out of the woods. Martinez\u2019s recovery would depend on diligent care from the ICU nurses and physicians.<\/p>\n<p>In some ways, a modern ICU can be seen as an elaborate data hub: the patient\u2019s vital signs are monitored by several devices, while nurses and doctors double as data-entry workers, tracking every change in the patient\u2019s condition, every dose of medication, and every procedure for which the hospital will need to seek reimbursement. Fortunately, the team working with Martinez had several AI tools to help with these tasks. A system developed by the Boston-based firm <a href=\"https:\/\/www.etiometry.com\/the-etiometry-platform\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Etiometry<\/a> displayed all the crucial data\u2014vital signs, medications, fluid levels, and more\u2014on a single screen at his bedside. A related package from the company tracked key data using an FDA-approved AI program designed to predict patient deterioration. If Martinez\u2019s condition started to slide, the nurses should get an alert before the problem turned perilous. Another AI program helped automate the input of the copious data that nurses were expected to collect, including the codes needed for later billing.<\/p>\n<p>Martinez\u2019s wife had previously been at the bedside of other family members in ICUs. She noticed how the nurses in this unit seemed a bit less harried. They spent less time clicking through forms on their tablets and more time talking to the patient\u2014and reassuring her that her husband\u2019s recovery was on track.<\/p>\n<p>The U.S. health-care system is far better overall than critics claim. But anyone who has spent much time in U.S. hospitals knows that even elite institutions leave much to be desired. 
Patients entering the system through the ER often spend long periods\u2014sometimes <a href=\"https:\/\/www.healthaffairs.org\/doi\/abs\/10.1377\/hlthaff.2024.01513?journalCode=hlthaff\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">more than 24 hours<\/a>\u2014waiting for a room to open up. Medical workers widely report feeling stressed and burned out. In a 2023 study published by the <a href=\"https:\/\/www.mayoclinicproceedings.org\/article\/S0025-6196(24)00668-2\/fulltext\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Mayo Clinic<\/a>, 45 percent of physicians reported experiencing at least one burnout symptom. Perhaps it is no surprise that, in a <a href=\"https:\/\/www.mckinsey.com\/industries\/healthcare\/our-insights\/the-physician-shortage-isnt-going-anywhere\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">2024 McKinsey &amp; Co. survey<\/a>, 35 percent of practicing physicians said that they are likely to leave their current roles in the next five years. Worse yet, medical errors remain a significant problem. Some estimates of death rates due to medical mistakes are wildly inflated, but a <a href=\"https:\/\/sciencebasedmedicine.org\/are-medical-errors-really-the-third-most-common-cause-of-death-in-the-u-s-2019-edition\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">judicious accounting<\/a> suggests that more than 100,000 patients annually suffer some \u201cadverse effect of medical treatment\u201d that contributes, at least partially, to their deaths.<\/p>\n<p>While not a silver bullet, AI can help address these problems. New York\u2019s Mount Sinai hospital system is a pioneer in finding ways to integrate AI into hospital operations. 
Mount Sinai\u2019s chief digital transformation officer, Robbie Freeman, is <a href=\"https:\/\/www.healthaffairs.org\/doi\/abs\/10.1377\/hlthaff.2024.01513?journalCode=hlthaff\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">helping test<\/a> a machine-learning algorithm designed to determine which ER patients will require a hospital bed. \u201cWe want to minimize what we call \u2018boarding\u2019 in the emergency department, patients waiting for a room to open up,\u201d said Freeman, whose background includes both a doctorate in nursing practice and an MBA. By predicting admissions earlier, the hospital can more quickly adjust staffing levels and other resources to ensure that the right beds are available. \u201cWhen we can plan better,\u201d he added, \u201cwe can help move patients through in a way that\u2019s best for them.\u201d<\/p>\n<p>Another ML algorithm, being developed at Mount Sinai\u2019s Icahn School of Medicine, forecasts which patients are most likely to develop delirium, a serious complication that can lead to combative behavior\u2014and ultimately to higher mortality. In a <a href=\"https:\/\/jamanetwork.com\/journals\/jamanetworkopen\/fullarticle\/2833621\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">study<\/a> published this year, the algorithm achieved a fourfold improvement over traditional clinical approaches. Patients at high risk of delirium can receive modified treatment protocols\u2014for example, being given lower doses of sedatives. Other hospitals are using similar tools to predict heart attacks and other patient crises.<\/p>\n<p>Medical ML applications such as these have been rolling out gradually over the past three decades. As noted, most are focused on reading scans or other very specific tasks. But the 2022 release of OpenAI\u2019s ChatGPT platform (soon followed by Google\u2019s Gemini, Meta AI, and other LLMs) opened up a new vista of possibility. LLM platforms enable open-ended generative AI. 
Users can ask them anything, and they do. I talked with one oncologist who routinely uses ChatGPT to draft letters to insurance companies seeking preauthorization to cover the cost of expensive new drugs. \u201cI\u2019ll tell it, \u2018Here are the patient\u2019s medical details. Please compose a letter explaining why she needs X medication, and please cite these five journal articles supporting this claim,\u2019 \u201d he said. The process saves him hours each week. That\u2019s time he can use talking to patients instead of insurance companies. Many nurses, too, are turning to chatbots to help with routine work, such as transcribing patient-intake interviews.<\/p>\n<p>But health-care workers need more than informal solutions. Over the years, clinical data and billing codes have grown more complex, and efforts to digitize record-keeping have forced doctors and nurses to spend their days clicking boxes on screens. A <a href=\"https:\/\/www.aha.org\/news\/headline\/2016-09-08-study-physicians-spend-nearly-twice-much-time-ehrdesk-work-patients\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">2016 study<\/a> published in the Annals of Internal Medicine found that doctors spent only 27 percent of their time in clinical face time with patients and 49 percent of their day on digital paperwork. \u201cIn our quest for the ultimate clinical information system, we created a mess,\u201d one longtime hospital chief information officer told me.<\/p>\n<p>In their book The AI Revolution in Medicine: GPT-4 and Beyond (2023), physicians Carey Goldberg and Isaac Kohane, along with Microsoft\u2019s Peter Lee, explored how LLM chatbots could help untangle this data dilemma. They envisioned a new era of \u201csymbiotic medicine,\u201d with the physician and an AI assistant working as partners. GPT-4 was particularly useful as a record-keeping assistant and as a \u201cuniversal translator\u201d between different medical data standards. 
For example, Medicare requires that data be recorded in the FHIR (Fast Healthcare Interoperability Resources) standard. In their tests, GPT-4 was \u201cable to convert health data both into and out of FHIR.\u201d<\/p>\n<p>\u201cGPT-4 appears to be a real game changer\u201d in automating clinical documentation, the authors write. Mount Sinai and other hospitals are experimenting with ambient AI, systems that monitor patient interviews (with the patient\u2019s permission) and then convert those conversations into clinical notes. \u201cThis can reduce what we call \u2018pajama time,\u2019 \u201d Freeman said. \u201cThat\u2019s the time after work when our clinical teams are catching up on their documentation.\u201d<\/p>\n<p>LLMs can even provide sophisticated analysis of tricky medical conditions. In The AI Revolution, Kohane writes that he was stunned to find GPT-4 giving clinical guidance \u201cbetter than many doctors I\u2019ve observed.\u201d But not always. The AI Revolution authors also observed the chatbot making mistakes that included \u201chighly convincing fabrications, omissions, and even negligence.\u201d Before such a system can be trusted as a clinical advisor, they argued, researchers will need to \u201cfind a path to trusting, but always verifying\u201d the chatbot\u2019s outputs. They ask, \u201cHow can we reap its benefits\u2014speed, scale, and scope of analysis\u2014while keeping it subordinate to the judgment, experience, and empathy of human doctors?\u201d<\/p>\n<p>Mount Sinai\u2019s Eyal Klang has been asking the same question. Klang is director of Mount Sinai\u2019s Generative AI Research Program and one of the authors of a <a href=\"https:\/\/www.nature.com\/articles\/s43856-025-01021-3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">2024 study<\/a> that looked at how ChatGPT and other LLMs might perform in the role of clinical assistant. 
To see if they could nudge the LLMs into hallucinating, the study\u2019s authors created medical vignettes that each included a single imaginary medical term, such as \u201cFaulkenstein Syndrome\u201d or \u201cRenal Stormblood Rebound Echo.\u201d Alarmingly, Klang said, \u201cAbout 50 percent of the time, [the chatbot] happily went and elaborated on this funny science that doesn\u2019t exist.\u201d <a href=\"https:\/\/ai.nejm.org\/doi\/full\/10.1056\/AIdbp2300040\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Another study<\/a> that Klang worked on found that LLMs often generated vague or fabricated information when asked to translate notes into standard medical billing codes.<\/p>\n<p>These are not trivial problems. Before doctors and nurses can rely on LLMs for clinical advice, or even for handling routine paperwork, these quirks must be solved. Kohane puts it bluntly: \u201cFor the foreseeable future, GPT-4 cannot be used in medical settings without direct human supervision.\u201d Fortunately, The AI Revolution authors, Klang, and others are already developing solutions. \u201cJust telling it, \u2018Please do not hallucinate\u2019 helps a lot,\u201d Klang said with a trace of amusement. Better yet, he added, \u201cYou can instruct the agent to double-check itself.\u201d Of course, doctors won\u2019t always rely on general-purpose chatbots. Finding ways to automate hallucination detection will be a key task for companies developing proprietary LLM platforms for clinical advice.<\/p>\n<p>Virtually everyone who has studied LLMs in medical applications agrees that this technology will be a force multiplier for overworked doctors and nurses. AI platforms should also help level the playing field for rural or underfinanced hospitals that lack cutting-edge medical expertise.<\/p>\n<p>First, though, the bugs need to be discovered and worked out. The FDA can play a role in this process. 
(The FDA has been \u201cfar too permissive\u201d in regulating AI algorithms, Topol told me; it should insist on more transparency and more published data.) But lawmakers must resist the temptation to address these AI limitations through premature regulation. The best way to avoid AI pitfalls is to continue the kind of research that Klang and others are pursuing. Then, health-care organizations must develop and document\u2014and continuously monitor\u2014best practices for using AI.<\/p>\n<p class=\"has-drop-cap\">After four days in the ICU and another week in a cardiac-care unit, Diego Martinez is leaving the hospital. But the hospital isn\u2019t leaving him. On his upper arm, he wears an <a href=\"https:\/\/www.fiercehealthcare.com\/tech\/ai-wearable-device-for-home-care-gets-fda-clearance\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">AI-enabled cuff<\/a> that will remotely monitor his blood pressure, blood oxygen, and other vitals. If his condition starts to slip, his doctors will probably know it before he does. Martinez faces months of rehab, but he is looking forward to sitting with his wife at breakfast and dreaming about\u2014just maybe\u2014being ready for softball season next year.<\/p>\n<p>The AI medical revolution won\u2019t be confined to hospitals. Generative AI is already transforming drug development, with several AI-designed molecules now in the pipeline. 
If these drugs prove effective in clinical trials, new treatments for <a href=\"https:\/\/www.news-medical.net\/news\/20240425\/Insilico-Medicines-AI-designed-drug-ISM3412-receives-FDA-IND-approval.aspx\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">various cancers<\/a>, <a href=\"https:\/\/pharmaphorum.com\/news\/exscientia-starts-clinical-trials-of-ai-designed-alzheimers-drug\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Alzheimer\u2019s<\/a>, and <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC11150274\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">drug-resistant infections<\/a> could reach patients more quickly than drugs developed by traditional means. LLMs also promise to streamline telehealth platforms, taking a burden off physicians struggling to keep up with the growing flood of text-based \u201casynchronous communications\u201d with patients. Even dentists will benefit as new AI platforms help their office staffs manage today\u2019s tangled billing processes.<\/p>\n<p>And because ML systems can see patterns that human researchers have yet to discern, AI will help doctors detect\u2014and someday, perhaps, prevent\u2014diseases like Alzheimer\u2019s and cancer years before they become clinically observable.<\/p>\n<p>It will not be a simple task to integrate the strengths of AI with the skills and judgment of human medical workers. But if we can fully exploit AI\u2019s benefits, learn to temper its risks\u2014and head off efforts to overregulate the AI revolution before it starts\u2014a more effective, more humane vision of health care is within reach.<\/p>\n<p><a href=\"https:\/\/www.city-journal.org\/person\/james-b-meigs\" target=\"_blank\" rel=\"noopener nofollow\">James B. Meigs<\/a> is a senior fellow at the Manhattan Institute, a contributing editor of City Journal, and the former editor of Popular Mechanics.<\/p>\n<p>Top Photo: An AI-powered machine tests for breast cancer during a clinical trial. 
(Klaudia Radecka\/NurPhoto\/Getty Images)<\/p>\n","protected":false},"excerpt":{"rendered":"Diego Martinez made a diving catch for the gently descending softball. At 48, he was getting old for&hellip;\n","protected":false},"author":2,"featured_media":359351,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[64,63,137,500],"class_list":{"0":"post-359350","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-au","9":"tag-australia","10":"tag-health","11":"tag-healthcare"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/359350","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=359350"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/359350\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/359351"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=359350"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=359350"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=359350"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}