{"id":428421,"date":"2026-02-16T09:31:07","date_gmt":"2026-02-16T09:31:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/428421\/"},"modified":"2026-02-16T09:31:07","modified_gmt":"2026-02-16T09:31:07","slug":"google-puts-users-at-risk-by-downplaying-health-disclaimers-under-ai-overviews-google","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/428421\/","title":{"rendered":"Google puts users at risk by downplaying health disclaimers under AI Overviews | Google"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may be wrong.<\/p>\n<p class=\"dcr-130mj7b\">When answering queries about sensitive topics such as health, the company says its AI Overviews, which appear above search results, prompt users to seek professional help, rather than relying solely on its summaries. \u201cAI Overviews will inform people when it\u2019s important to seek out expert advice or to verify the information presented,\u201d <a href=\"https:\/\/search.google\/pdf\/google-about-AI-overviews-AI-Mode.pdf\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Google has said<\/a>.<\/p>\n<p class=\"dcr-130mj7b\">But the Guardian found the company does not include any such disclaimers when users are first presented with medical advice.<\/p>\n<p class=\"dcr-130mj7b\">Google only issues a warning if users choose to request additional health information and click on a button called \u201cShow more\u201d. Even then, safety labels only appear below all of the extra medical advice assembled using generative AI, and in a smaller, lighter font.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThis is for informational purposes only,\u201d the disclaimer tells users who click through for further details after seeing the initial summary, and navigate their way to the very end of the AI Overview. \u201cFor medical advice or a diagnosis, consult a professional. 
AI responses may include mistakes.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Google did not deny its disclaimers fail to appear when users are first served medical advice, or that they appear below AI Overviews and in a smaller, lighter font. AI Overviews \u201cencourage people to seek professional medical advice\u201d, and frequently mention seeking medical attention within the summary itself \u201cwhen appropriate\u201d, a spokesperson said.<\/p>\n<p class=\"dcr-130mj7b\">AI experts and patient advocates presented with the Guardian\u2019s findings said they were concerned. Disclaimers serve a vital purpose, they said, and should appear prominently when users are first provided with medical advice.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThe absence of disclaimers when users are initially served medical information creates several critical dangers,\u201d said Pat Pataranutaporn, an assistant professor, technologist and researcher at the Massachusetts Institute of Technology (MIT) and a world-renowned expert in AI and human-computer interaction.<\/p>\n<p class=\"dcr-130mj7b\">\u201cFirst, even the most advanced AI models today still hallucinate misinformation or exhibit sycophantic behaviour, prioritising user satisfaction over accuracy. In healthcare contexts, this can be genuinely dangerous.<\/p>\n<p class=\"dcr-130mj7b\">\u201cSecond, the issue isn\u2019t just about AI limitations \u2013 it\u2019s about the human side of the equation. Users may not provide all necessary context or may ask the wrong questions by misobserving their symptoms.<\/p>\n<p class=\"dcr-130mj7b\">\u201cDisclaimers serve as a crucial intervention point. They disrupt this automatic trust and prompt users to engage more critically with the information they receive.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Gina Neff, a professor of responsible AI at Queen Mary University of London, said the \u201cproblem with bad AI Overviews is by design\u201d and Google was to blame. 
\u201cAI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous.\u201d<\/p>\n<p class=\"dcr-130mj7b\">In January, <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/02\/google-ai-overviews-risk-harm-misleading-health-information\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">a Guardian investigation<\/a> revealed people were being put at risk of harm by false and misleading health information in Google AI Overviews.<\/p>\n<p class=\"dcr-130mj7b\">Neff said <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/02\/google-ai-overviews-risk-harm-misleading-health-information\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">the investigation\u2019s findings<\/a> showed why prominent disclaimers were essential. \u201cGoogle makes people click through before they find any disclaimer,\u201d she said. \u201cPeople reading quickly may think the information they get from AI Overviews is better than what it is, but we know it can make serious mistakes.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Following the Guardian\u2019s reporting, Google <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/11\/google-ai-overviews-health-guardian-investigation\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">removed AI Overviews<\/a> for some but not all medical searches.<\/p>\n<p class=\"dcr-130mj7b\">Sonali Sharma, a researcher at Stanford University\u2019s centre for AI in medicine and imaging (AIMI), said: \u201cThe major issue is that these Google AI Overviews appear at the very top of the search page and often provide what feels like a complete answer to a user\u2019s question at a time where they are trying to access information and get an answer as quickly as possible.<\/p>\n<p class=\"dcr-130mj7b\">\u201cFor many people, because that single summary is there immediately, it basically <a 
href=\"https:\/\/www.theguardian.com\/technology\/ng-interactive\/2026\/jan\/24\/how-the-confident-authority-of-google-ai-overviews-is-putting-public-health-at-risk\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">creates a sense of reassurance<\/a> that discourages further searching, or scrolling through the full summary and clicking \u2018Show more\u2019 where a disclaimer might appear.<\/p>\n<p class=\"dcr-130mj7b\">\u201cWhat I think can lead to real-world harm is the fact that the AI Overviews can often contain partially correct and partially incorrect information, and it becomes very difficult to tell what is accurate or not, unless you are familiar with the subject matter already.\u201d<\/p>\n<p class=\"dcr-130mj7b\">A Google spokesperson said: \u201cIt\u2019s inaccurate to suggest that AI Overviews don\u2019t encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Tom Bishop, head of patient information at Anthony Nolan, a blood cancer charity, called for urgent action. \u201cWe know misinformation is a real problem, but when it comes to health misinformation, it\u2019s potentially really dangerous,\u201d said Bishop.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThat disclaimer needs to be much more prominent, just to make people step back and think \u2026 \u2018Is this something I need to check with my medical team rather than acting upon it? Can I take this at face value or do I really need to look into it in more detail and see how this information relates to my own specific medical situation?\u2019 Because that\u2019s the key here.\u201d<\/p>\n<p class=\"dcr-130mj7b\">He added: \u201cI\u2019d like this disclaimer to be right at the top. I\u2019d like it to be the first thing you see. 
And ideally it would be the same size font as everything else you\u2019re seeing there, not something that\u2019s small and easy to miss.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Google is putting people at risk of harm by downplaying safety warnings that its AI-generated medical advice may&hellip;\n","protected":false},"author":2,"featured_media":428422,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-428421","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/428421","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=428421"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/428421\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/428422"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=428421"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=428421"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=428421"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}