{"id":218823,"date":"2025-10-22T22:01:06","date_gmt":"2025-10-22T22:01:06","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/218823\/"},"modified":"2025-10-22T22:01:06","modified_gmt":"2025-10-22T22:01:06","slug":"when-sycophancy-and-bias-meet-medicine","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/218823\/","title":{"rendered":"When sycophancy and bias meet medicine"},"content":{"rendered":"<p>Once upon a time, two villagers visited the fabled Mullah Nasreddin. They hoped that the Sufi philosopher, famed for his acerbic wisdom, could mediate a dispute that had driven a wedge between them. Nasreddin listened patiently to the first villager\u2019s version of the story and, upon its conclusion, exclaimed, \u201cYou are absolutely right!\u201d The second villager then presented his case. After hearing him out, Nasreddin again responded, \u201cYou are absolutely right!\u201d An observant bystander, confused by Nasreddin\u2019s proclamations, interjected, \u201cBut Mullah, they can\u2019t both be right.\u201d Nasreddin paused, regarding the bystander for a moment before replying, \u201cYou are absolutely right, too!\u201d<\/p>\n<p>In late May, the White House\u2019s first \u201cMake America Healthy Again\u201d (MAHA) report was criticized for citing multiple research studies that did not exist. Fabricated citations like these are common in the outputs of generative artificial intelligence based on large language models, or LLMs. LLMs have been known to invent plausible-sounding sources, catchy titles, or even false data to support their conclusions. 
In this case, the White House initially pushed back on the journalists who broke the story before admitting to \u201c<a href=\"https:\/\/www.science.org\/content\/article\/trump-officials-downplay-fake-citations-high-profile-report-children-s-health\" rel=\"nofollow noopener\" target=\"_blank\">minor citation errors<\/a>.\u201d<\/p>\n<p>It is ironic that fake citations were used to support a principal recommendation of the MAHA report: addressing the health research sector\u2019s \u201c<a href=\"https:\/\/www.nature.com\/articles\/d41586-024-04253-w\" rel=\"nofollow noopener\" target=\"_blank\">replication crisis<\/a>,\u201d wherein scientists\u2019 findings often cannot be reproduced by other independent teams.<\/p>\n<p>Yet the MAHA report\u2019s use of phantom evidence is far from unique. Earlier this year, The Washington Post reported on <a href=\"https:\/\/www.washingtonpost.com\/nation\/2025\/06\/03\/attorneys-court-ai-hallucinations-judges\/\" rel=\"nofollow noopener\" target=\"_blank\">dozens of instances<\/a> in which AI-generated falsehoods found their way into courtroom proceedings. Once the fabrications were uncovered, lawyers had to explain to judges how fictitious cases, citations, and decisions had entered the record.<\/p>\n<p>Despite these widely recognized problems, the MAHA roadmap released last month directs the Department of Health and Human Services to prioritize AI research to \u201c\u2026assist in earlier diagnosis, personalized treatment plans, real-time monitoring, and predictive interventions\u2026\u201d This breathless rush to embed AI in so many aspects of medicine could be forgiven if we believed that the technology\u2019s \u201challucinations\u201d would be easy to fix through version updates. 
But as <a href=\"http:\/\/openai.com\/index\/why-language-models-hallucinate\/\" rel=\"nofollow noopener\" target=\"_blank\">the industry itself acknowledges<\/a>, these ghosts in the machine may be impossible to eliminate.<\/p>\n<p>Consider the implications of accelerating AI use in health research for clinical decision-making. Beyond the problems already described, using AI in research without disclosure could create a feedback loop, supercharging the very biases that motivated its use in the first place. Once published, \u201cresearch\u201d based on false results and citations could become part of the datasets used to build future AI systems. Worse still, a <a href=\"https:\/\/www.pnas.org\/doi\/abs\/10.1073\/pnas.2420092122\" rel=\"nofollow noopener\" target=\"_blank\">recently published study<\/a> highlights an industry of scientific fraudsters who could deploy AI to make their claims seem more legitimate.<\/p>\n","protected":false},"excerpt":{"rendered":"Once upon a time, two villagers visited the fabled Mullah Nasreddin. 
They hoped that the Sufi philosopher, famed&hellip;\n","protected":false},"author":2,"featured_media":218824,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[43],"tags":[102,2960,56,54,55],"class_list":{"0":"post-218823","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-health","9":"tag-healthcare","10":"tag-uk","11":"tag-united-kingdom","12":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/218823","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=218823"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/218823\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/218824"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=218823"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=218823"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=218823"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}