{"id":8869,"date":"2025-09-11T05:46:08","date_gmt":"2025-09-11T05:46:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/8869\/"},"modified":"2025-09-11T05:46:08","modified_gmt":"2025-09-11T05:46:08","slug":"ai-is-revolutionizing-health-care-but-it-cant-replace-your-doctor","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/8869\/","title":{"rendered":"AI Is Revolutionizing Health Care. But It Can\u2019t Replace Your Doctor"},"content":{"rendered":"<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color min-h-[6.375rem] lg:min-h-[4.75rem] dropcap text-left\" data-testid=\"paragraph-content\">The next time you get a blood test, X-ray, <a href=\"https:\/\/time.com\/7027180\/what-to-expect-mammogram-breast-cancer-screening\/\" rel=\"nofollow noopener\" target=\"_blank\">mammogram<\/a>, or <a href=\"https:\/\/time.com\/7009176\/colonoscopy-prep-procedure\/\" rel=\"nofollow noopener\" target=\"_blank\">colonoscopy<\/a>, there\u2019s a good chance an artificial intelligence (AI) algorithm will first interpret the results even before your doctor has seen it.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Over the course of just a few years, AI has spread rapidly into hospitals and clinics around the world. More than 1,000 health-related AI tools have been authorized for use by the U.S. 
Food and Drug Administration (FDA), and more than 2 in 3 physicians say they use AI to some degree, according to a recent <a href=\"https:\/\/www.ama-assn.org\/practice-management\/digital-health\/2-3-physicians-are-using-health-ai-78-2023?utm_source=chatgpt.com\" rel=\"nofollow noopener\" target=\"_blank\">survey<\/a> by the American Medical Association. The potential is extraordinary. AI\u2014particularly in the form of AI agents that can reason, adapt, and act on their own\u2014can <a href=\"https:\/\/time.com\/7310911\/ambient-ai-doctor-burnout-health-care\/\" rel=\"nofollow noopener\" target=\"_blank\">lighten doctors\u2019 workloads by drafting patient notes<\/a> and chart summaries, support precision medicine through more targeted therapies, and flag subtle abnormalities in scans and slides that a human eye might miss. It can speed discovery of drugs and drug targets through new processes, such as AI-driven protein structure prediction and design that led to last year\u2019s Nobel Prize in Chemistry. AI can give patients faster, more personalized support by scheduling appointments, answering questions, and flagging side effects. 
It can help match candidates to clinical trials and monitor health data in real time, alerting clinicians and patients early to prevent complications and improve outcomes.\u00a0\u00a0<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">But the promise of AI in medicine will only be realized if it is built and used responsibly.\u00a0<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Today&#8217;s AI algorithms are powerful tools that recognize patterns, predict, and even make decisions. But they are not infallible, all-knowing oracles. Nor are they on the verge of matching human intelligence, despite what some evangelists of so-called artificial general intelligence suggest. A handful of recent studies reflect the possibilities but also the pitfalls, pointing out how medical AI tools can misdiagnose patients and how doctors\u2019 own skills can weaken with AI.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">A team at Duke University (including one of us) <a href=\"https:\/\/www.ajnr.org\/content\/early\/2025\/07\/30\/ajnr.A8946.long\" rel=\"nofollow noopener\" target=\"_blank\">tested<\/a> an FDA-cleared AI tool meant to detect swelling and microbleeds in the brain MRIs of patients with Alzheimer\u2019s disease. 
The tool improved the ability of expert radiologists to find these subtle spots in an MRI, but it also raised false alarms, often mistaking harmless blurs for something dangerous. We concluded that the tool is helpful, but radiologists should do a careful read of MRIs first, and then use the tool as a second opinion\u2014not the other way around.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">These kinds of findings are not confined to the tool we looked at. Few hospitals are independently assessing the AI tools they use. Many assume that just because a tool has been cleared by the FDA, it will work in their local setting, which is not necessarily true. AI tools work differently for different patient populations, and each has unique weaknesses. That\u2019s why it\u2019s essential for health systems to perform due diligence and a quality check before implementing any AI tool, to ensure it will work in their local setting, and then to educate clinicians. 
In addition, both AI algorithms and the ways humans interact with them change over time, prompting former FDA commissioner Robert Califf to <a href=\"https:\/\/dcri.org\/news\/acc25-dcri-faculty-share-cardiovascular-ai-and-other-critical-insights\" rel=\"nofollow noopener\" target=\"_blank\">urge<\/a> constant post-market monitoring of medical AI tools to ensure they remain reliable and safe in the real world.\u00a0\u00a0<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">In another recent <a href=\"https:\/\/time.com\/7309274\/ai-lancet-study-artificial-intelligence-colonoscopy-cancer-detection-medicine-deskilling\/?utm_source=chatgpt.com\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a>, gastroenterologists in Europe were given a new AI-assisted system for spotting polyps during colonoscopies. Using the tool, they initially found more polyps\u2014tiny growths that can turn into cancer\u2014suggesting the AI was helping them spot areas they may have otherwise missed. But when the doctors then returned to performing colonoscopies without the AI system, they detected fewer pre-cancerous polyps than before they\u2019d used the AI. Although it\u2019s not clear exactly why, the study\u2019s authors believe clinicians may have become so reliant on AI that in its absence they became less focused and less able to spot these polyps. This phenomenon of \u201cdeskilling\u201d is supported by another <a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC9500006\/\" rel=\"nofollow noopener\" target=\"_blank\">study<\/a>, which showed that overreliance on computerized aids may make the human gaze less likely to scan peripheral visual fields. 
The very tool meant to sharpen medical practice had perhaps blunted it.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">AI, if used uncritically, can not only propagate wrong information but also erode our very ability to fact-check it. It\u2019s the Google Maps effect: drivers who once navigated by memory now often lack basic geographic awareness because they\u2019re used to blindly following the voice in their car. Earlier this year, a researcher surveyed more than 600 people across diverse age groups and educational backgrounds and found that the more someone used AI tools, the weaker their critical-thinking abilities were. This is known as \u201ccognitive off-loading,\u201d and we are only just starting to understand how it relates to AI usage by clinicians.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Read More: <a href=\"https:\/\/time.com\/7206222\/taxi-drivers-alzheimers-disease-christopher-worsham-anupam-jena\/\" rel=\"nofollow noopener\" target=\"_blank\">Why Do Taxi Drivers Have a Lower Risk of Alzheimer\u2019s?<\/a><\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">All of this underscores that AI in medicine, as in every field, works best when it augments the work of humans. 
The future of medicine isn\u2019t about replacing health care providers with algorithms\u2014it\u2019s about designing tools that sharpen human judgment and amplify what we can accomplish. Doctors and other providers must be able to gauge when AI is wrong, and must maintain the ability to work without AI tools if necessary. The way to make this happen is to build medical AI tools responsibly.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">We need tools built on a different paradigm\u2014ones that nudge providers to look again, to weigh alternatives, and to stay actively engaged. This approach is known as Intelligent Choice Architecture (ICA). With ICA, AI systems are designed to support judgment rather than supplant it. Instead of declaring \u201chere is a bleed,\u201d an ICA tool might highlight an area and prompt, \u201ccheck this region carefully.\u201d ICA augments the skills medicine depends on\u2014clinical reasoning, critical thinking, and human judgment.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">Apollo Hospitals, India\u2019s largest private health system, recently began using an ICA <a href=\"https:\/\/www.apollohealthaxis.com\/case-studies\/apollos-aicvd-redefining-cardiovascular-risk-assessment-globally-beyond-conventions\/\" rel=\"nofollow noopener\" target=\"_blank\">tool<\/a> to guide doctors in preventing heart attacks. A previous AI tool had provided a single heart-attack risk score for each patient. 
The new system provides a more personalized breakdown of what that score means for each patient and what contributed to it, so that the patient knows which risk factors to address. It\u2019s an example of the kind of gentle nudging that can allow doctors to succeed at their jobs without taking away their autonomy.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">There is a temptation to oversell AI as if it has all the answers. In medicine, we must temper these expectations to save lives. We must train medical students to work both with and without AI tools and to treat AI as a second opinion or an assistant rather than an expert with all the right answers. The future is humans and AI agents working together.<\/p>\n<p class=\"rich-text mb-6 self-baseline font-graphik text-body-large text-black-coffee focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">We\u2019ve added tools to medicine before without weakening clinicians\u2019 skills. The stethoscope amplifies the ear without replacing it. Blood tests provide new diagnostic information without eliminating the need for a medical history or physical exams. We should hold AI to the same standard. 
If a new product makes doctors less observant or less decisive, it\u2019s not ready for prime time, or it\u2019s being used the wrong way.<\/p>\n<p class=\"rich-text self-baseline font-graphik text-body-large text-black-coffee mb-0 focus-visible:outline focus-visible:outline-black-coffee focus-visible:outline-2 focus-visible:outline-offset-2 focus-visible:shadow-focus-color text-left\" data-testid=\"paragraph-content\">For any new medical AI, we should be asking whether it makes the clinician more thoughtful, or less. Does it encourage a second look or invite a rubber stamp? If we commit to designing only those systems that sharpen rather than replace our abilities, we\u2019ll get the best of both worlds, combining the extraordinary promise of AI with the critical thinking, compassion, and real-world judgment that only humans can bring.<\/p>\n","protected":false},"excerpt":{"rendered":"The next time you get a blood test, X-ray, mammogram, or colonoscopy, there\u2019s a good chance an artificial&hellip;\n","protected":false},"author":2,"featured_media":8870,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[9884,163,521,85,46],"class_list":{"0":"post-8869","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-freelance","9":"tag-health","10":"tag-healthcare","11":"tag-il","12":"tag-israel"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/8869","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=8869"}],"version-history":[{"count":0,"href
":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/8869\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/8870"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=8869"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=8869"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=8869"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}