If A.I. Can Diagnose Patients, What Are Doctors For?

It seems inevitable that the future of medicine will involve A.I., and medical schools are already encouraging students to use large language models. “I’m worried these tools will erode my ability to make an independent diagnosis,” Benjamin Popokh, a medical student at the University of Texas Southwestern, told me. Popokh decided to become a doctor after a twelve-year-old cousin died of a brain tumor. On a recent rotation, his professors asked his class to work through a case using A.I. tools such as ChatGPT and OpenEvidence, an increasingly popular medical L.L.M. that provides free access to health-care professionals. Each chatbot correctly diagnosed a blood clot in the lungs. “There was no control group,” Popokh said, meaning that none of the students worked through the case unassisted. For a time, Popokh found himself using A.I. after virtually every patient encounter. “I started to feel dirty presenting my thoughts to attending physicians, knowing they were actually the A.I.’s thoughts,” he told me. One day, as he left the hospital, he had an unsettling realization: he hadn’t thought about a single patient independently that day. He decided that, from then on, he would force himself to settle on a diagnosis before consulting artificial intelligence. “I went to medical school to become a real, capital-‘D’ doctor,” he told me.
“If all you do is plug symptoms into an A.I., are you still a doctor, or are you just slightly better at prompting A.I. than your patients?”

A few weeks after the CaBot demonstration, Manrai gave me access to the model. It was trained on C.P.C.s from The New England Journal of Medicine; I first tested it on cases from the JAMA network, a family of leading medical journals. It made accurate diagnoses of patients with a variety of conditions, including rashes, lumps, growths, and muscle loss, with a small number of exceptions: it mistook one type of tumor for another and misdiagnosed a viral mouth ulcer as cancer. (ChatGPT, in comparison, misdiagnosed about half the cases I gave it, mistaking cancer for an infection and an allergic reaction for an autoimmune condition.) Real patients do not present as carefully curated case studies, however, and I wanted to see how CaBot would respond to the kinds of situations that doctors actually encounter.

I gave CaBot the broad strokes of what Matthew Williams had experienced: bike ride, dinner, abdominal pain, vomiting, two emergency-department visits. I didn’t organize the information in the way that a doctor would. Alarmingly, when CaBot generated one of its crisp presentations, the slides were full of made-up lab values, vital signs, and exam findings. “Abdomen looks distended up top,” the A.I. said, incorrectly. “When you rock him gently, you hear that classic succussion splash—liquid sloshing in a closed container.” CaBot even conjured up a report of a CT scan that supposedly showed Williams’s bloated stomach.
It arrived at a mistaken diagnosis of gastric volvulus: a twisting of the stomach, not the bowel.

I tried giving CaBot a formal summary of Williams’s second emergency visit, as detailed by the doctors who saw him, and this produced a very different result—presumably because they had more data, sorted by salience. The patient’s hemoglobin level had plummeted; his white cells, or leukocytes, had multiplied; he was doubled over in pain. This time, CaBot latched on to the pertinent data and did not seem to make anything up. “Strangulation indicators—constant pain, leukocytosis, dropping hemoglobin—are all flashing at us,” it said. CaBot diagnosed an obstruction in the small intestine, possibly owing to volvulus or a hernia. “Get surgery involved early,” it said. Technically, CaBot was slightly off the mark: Williams’s problem arose in the large, not the small, intestine. But the next steps would have been virtually identical. A surgeon would have found the intestinal knot.

Talking to CaBot was both empowering and unnerving. I felt as though I could now receive a second opinion, in any specialty, anytime I wanted. But only with vigilance and medical training could I take full advantage of its abilities—and detect its mistakes. A.I. models can sound like Ph.D.s, even while making grade-school errors in judgment. Chatbots can’t examine patients, and they’re known to struggle with open-ended queries. Their output gets better when you emphasize what’s most important, but most people aren’t trained to sort symptoms in that way. A person with chest pain might be experiencing acid reflux, inflammation, or a heart attack; a doctor would ask whether the pain happens when they eat, when they walk, or when they’re lying in bed. If the person leans forward, does the pain worsen or lessen?
Sometimes we listen for phrases that dramatically increase the odds of a particular condition. “Worst headache of my life” may mean brain hemorrhage; “curtain over my eye” suggests a retinal-artery blockage. The difference between A.I. and earlier diagnostic technologies is like the difference between a power saw and a hacksaw. But a user who’s not careful could cut off a finger.

Attend enough clinicopathological conferences, or watch enough episodes of “House,” and every medical case starts to sound like a mystery to be solved. Lisa Sanders, the doctor at the center of the Times Magazine column and Netflix series “Diagnosis,” has compared her work to that of Sherlock Holmes. But the daily practice of medicine is often far more routine and repetitive. On a rotation at a V.A. hospital during my training, for example, I felt less like Sherlock than like Sisyphus. Virtually every patient, it seemed, presented with some combination of emphysema, heart failure, diabetes, chronic kidney disease, and high blood pressure. I became acquainted with a new phrase—“likely multifactorial,” which meant that there were several explanations for what the patient was experiencing—and I looked for ways to address one condition without exacerbating another. (Draining fluid to relieve an overloaded heart, for example, can easily dehydrate the kidneys.) Sometimes a precise diagnosis was beside the point; a patient might come in with shortness of breath and low oxygen levels and be treated for chronic obstructive pulmonary disease, heart failure, and pneumonia. Sometimes we never figured out which had caused a given episode—yet we could help the patient feel better and send him home. Asking an A.I.
to diagnose him would not have offered us much clarity; in practice, there was no neat and satisfying solution.

Tasking an A.I. with solving a medical case makes the mistake of “starting with the end,” according to Gurpreet Dhaliwal, a physician at the University of California, San Francisco, whom the Times once described as “one of the most skillful clinical diagnosticians in practice.” In Dhaliwal’s view, doctors are better off asking A.I. for help with “wayfinding”: instead of asking what sickened a patient, a doctor could ask a model to identify trends in the patient’s trajectory, along with important details that the doctor might have missed. The model would not give the doctor orders to follow; instead, it might alert her to a recent study, propose a helpful blood test, or unearth a lab result in a decades-old medical record. Dhaliwal’s vision for medical A.I. recognizes the difference between diagnosing people and competently caring for them. “Just because you have a Japanese-English dictionary in your desk doesn’t mean you’re fluent in Japanese,” he told me.

[Cartoon: “I don’t care what they call it—I need my iced coffee to be at least this tall.” Cartoon by Lauren Simkin Berke]

CaBot remains experimental, but other A.I. tools are already shaping patient care. ChatGPT is blocked on my hospital’s network, but I and many of my colleagues use OpenEvidence. The platform has licensing agreements with top medical journals and says it complies with the patient-privacy law HIPAA.
Each of its answers cites a set of peer-reviewed articles, sometimes including an exact figure or a verbatim quote from a relevant paper, to prevent hallucinations. When I gave OpenEvidence a recent case, it didn’t immediately try to solve the mystery but, rather, asked me a series of clarifying questions.