{"id":355511,"date":"2025-12-18T06:19:18","date_gmt":"2025-12-18T06:19:18","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/355511\/"},"modified":"2025-12-18T06:19:18","modified_gmt":"2025-12-18T06:19:18","slug":"researchers-discover-bias-in-ai-models-that-analyze-pathology-samples","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/355511\/","title":{"rendered":"Researchers Discover Bias in AI Models That Analyze Pathology Samples"},"content":{"rendered":"<p>They discovered that all four models showed biased performance, providing less accurate diagnoses for patients in specific groups based on self-reported race, gender, and age. For example, the models struggled to differentiate lung cancer subtypes in African American and male patients, and breast cancer subtypes in younger patients. The models also had trouble detecting breast, renal, thyroid, and stomach cancer in certain demographic groups. These performance disparities occurred in around 29 percent of the diagnostic tasks the models performed. <\/p>\n<p>This diagnostic inaccuracy, Yu said, happens because these models extract demographic information from the slides \u2014 and rely on demographic-specific patterns to make a diagnosis. <\/p>\n<p>The results were unexpected \u201cbecause we would expect pathology evaluation to be objective,\u201d Yu added. \u201cWhen evaluating images, we don\u2019t necessarily need to know a patient\u2019s demographics to make a diagnosis.\u201d<\/p>\n<p>The team wondered: Why didn\u2019t pathology AI show the same objectivity?<\/p>\n<p>Searching for explanations<\/p>\n<p>The researchers landed on three explanations.<\/p>\n<p>Because it is easier to get samples from patients in certain demographic groups, the AI models are trained on unequal sample sizes. 
As a result, the models have a harder time making an accurate diagnosis in samples that aren\u2019t well-represented in the training set, such as those from minority groups based on race, age, or gender.<\/p>\n<p>Yet \u201cthe problem turned out to be much deeper than that,\u201d Yu said. The researchers noticed that sometimes the models performed worse in one demographic group, even when the sample sizes were comparable. <\/p>\n<p>Additional analyses revealed that this may be because of differential disease incidence: Some cancers are more common in certain groups, so the models become better at making a diagnosis in those groups. As a result, the models may have difficulty diagnosing cancers in populations where they aren\u2019t as common. <\/p>\n<p>The AI models also pick up on subtle molecular differences in samples from different demographic groups. For example, the models may detect mutations in cancer driver genes and use them as a proxy for cancer type \u2014 and thus be less effective at making a diagnosis in populations in which these mutations are less common. <\/p>\n<p>\u201cWe found that because AI is so powerful, it can differentiate many obscure biological signals that cannot be detected by standard human evaluation,\u201d Yu said. <\/p>\n<p>As a result, the models may learn signals that are more related to demographics than disease. That, in turn, could affect their diagnostic ability across groups. <\/p>\n<p>Together, Yu said, these explanations suggest that bias in pathology AI stems not only from the variable quality of the training data but also from how researchers train the models. <\/p>\n<p>Finding a fix<\/p>\n<p>After assessing the scope and sources of the bias, Yu and his team wanted to fix it.<\/p>\n<p>The researchers developed FAIR-Path, a simple framework based on an existing machine-learning concept called contrastive learning. 
Contrastive learning involves adding an element to AI training that teaches the model to emphasize the differences between essential categories \u2014 in this case, cancer types \u2014 and to downplay the differences between less crucial categories \u2014 here, demographic groups. <\/p>\n<p>When the researchers applied the FAIR-Path framework to the models they\u2019d tested, it reduced the diagnostic disparities by around 88 percent. <\/p>\n<p>\u201cWe show that by making this small adjustment, the models can learn robust features that make them more generalizable and fairer across different populations,\u201d Yu said. <\/p>\n<p>The finding is encouraging, he added, because it suggests that bias can be reduced even without training the models on completely fair, representative data. <\/p>\n<p>Next, Yu and his team are collaborating with institutions around the world to investigate the extent of bias in pathology AI in places with different demographics and clinical and pathology practices. They are also exploring ways to extend FAIR-Path to settings with limited sample sizes. Additionally, they would like to investigate how bias in AI contributes to demographic discrepancies in health care and patient outcomes. <\/p>\n<p>Ultimately, Yu said, the goal is to create fair, unbiased pathology AI models that can improve cancer care by helping human pathologists quickly and accurately make a diagnosis. <\/p>\n<p>\u201cI think there\u2019s hope that if we are more aware of and careful about how we design AI systems, we can build models that perform well in every population,\u201d he said.<\/p>\n<p>The future of federally funded research at Harvard Medical School \u2014 supported by taxpayers and done in service to humanity \u2014 remains uncertain. 
<a href=\"https:\/\/hms.harvard.edu\/research\/threats-research-funding-harvard-medical-school\" data-entity-type=\"external\" rel=\"nofollow noopener\" target=\"_blank\">Learn more.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"They discovered that all four models had biased performances, providing less accurate diagnoses for patients in specific groups&hellip;\n","protected":false},"author":2,"featured_media":355512,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34],"tags":[64,63,137,500],"class_list":{"0":"post-355511","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-au","9":"tag-australia","10":"tag-health","11":"tag-healthcare"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/355511","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=355511"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/355511\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/355512"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=355511"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=355511"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=355511"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}