{"id":232433,"date":"2026-01-11T10:29:08","date_gmt":"2026-01-11T10:29:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/232433\/"},"modified":"2026-01-11T10:29:08","modified_gmt":"2026-01-11T10:29:08","slug":"dangerous-and-alarming-google-removes-some-of-its-ai-summaries-after-users-health-put-at-risk-google","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/232433\/","title":{"rendered":"\u2018Dangerous and alarming\u2019: Google removes some of its AI summaries after users\u2019 health put at risk | Google"},"content":{"rendered":"<p class=\"dcr-130mj7b\">Google has removed some of its artificial intelligence health summaries after <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/02\/google-ai-overviews-risk-harm-misleading-health-information\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">a Guardian investigation<\/a> found people were being put at risk of harm by false and misleading information.<\/p>\n<p class=\"dcr-130mj7b\">The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are \u201c<a href=\"https:\/\/blog.google\/products\/search\/generative-ai-google-search-may-2024\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">helpful<\/a>\u201d and \u201c<a href=\"https:\/\/search.google\/intl\/en-GB\/ways-to-search\/ai-overviews\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">reliable<\/a>\u201d.<\/p>\n<p class=\"dcr-130mj7b\">But some of the summaries, which appear at the top of search results, served up inaccurate health information, putting users at risk of harm.<\/p>\n<p class=\"dcr-130mj7b\">In one case that experts described as \u201cdangerous\u201d and \u201calarming\u201d, <a href=\"https:\/\/www.theguardian.com\/technology\/google\" data-link-name=\"in body link\" data-component=\"auto-linked-tag\" rel=\"nofollow noopener\" 
target=\"_blank\">Google<\/a> provided bogus information about crucial liver function tests that could leave people with serious liver disease wrongly thinking they were healthy.<\/p>\n<p class=\"dcr-130mj7b\">Typing \u201cwhat is the normal range for liver blood tests\u201d served up masses of numbers, little context and no accounting for the nationality, sex, ethnicity or age of patients, the Guardian found.<\/p>\n<p class=\"dcr-130mj7b\">What Google\u2019s AI Overviews said was normal could vary drastically from what was actually considered normal, experts said. The summaries could lead to seriously ill patients wrongly thinking they had a normal test result and not bothering to attend follow-up healthcare appointments.<\/p>\n<p class=\"dcr-130mj7b\"><a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/02\/google-ai-overviews-risk-harm-misleading-health-information\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">After the investigation<\/a>, the company removed AI Overviews for the search terms \u201cwhat is the normal range for liver blood tests\u201d and \u201cwhat is the normal range for liver function tests\u201d.<\/p>\n<p class=\"dcr-130mj7b\">A Google spokesperson said: \u201cWe do not comment on individual removals within Search. 
In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Vanessa Hebditch, the director of communications and policy at the British Liver Trust, a liver health charity, said: \u201cThis is excellent news, and we\u2019re pleased to see the removal of the Google AI Overviews in these instances.<\/p>\n<p class=\"dcr-130mj7b\">\u201cHowever, if the question is asked in a different way, a potentially misleading AI Overview may still be given and we remain concerned other AI\u2011produced health information can be inaccurate and confusing.\u201d<\/p>\n<p class=\"dcr-130mj7b\">The Guardian found that typing slight variations of the original queries into Google, such as \u201clft reference range\u201d or \u201clft test reference range\u201d, prompted AI Overviews. That was a big worry, Hebditch said.<\/p>\n<p class=\"dcr-130mj7b\">\u201cA liver function test or LFT is a collection of different blood tests. Understanding the results and what to do next is complex and involves a lot more than comparing a set of numbers.<\/p>\n<p class=\"dcr-130mj7b\">\u201cBut the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.<\/p>\n<p class=\"dcr-130mj7b\">\u201cIn addition, the AI Overviews fail to warn that someone can get normal results for these tests when they have serious liver disease and need further medical care. 
This false reassurance could be very harmful.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Google, <a href=\"https:\/\/gs.statcounter.com\/search-engine-market-share\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">which has a 91% share of the global search engine market<\/a>, said it was reviewing the new examples provided to it by the Guardian.<\/p>\n<p class=\"dcr-130mj7b\">Hebditch said: \u201cOur bigger concern with all this is that it is nit-picking a single search result and Google can just shut off the AI Overviews for that but it\u2019s not tackling the bigger issue of AI Overviews for health.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Sue Farrington, the chair of the Patient Information Forum, which promotes evidence-based health information to patients, the public and healthcare professionals, welcomed the removal of the summaries but said she still had concerns.<\/p>\n<p class=\"dcr-130mj7b\">\u201cThis is a good result but it is only the very first step in what is needed to maintain trust in Google\u2019s health-related search results. There are still too many examples out there of Google AI Overviews giving people inaccurate health information.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Millions of adults worldwide already struggle to access trusted health information, Farrington said. \u201cThat\u2019s why it is so important that Google signposts people to robust, researched health information and offers of care from trusted health organisations.\u201d<\/p>\n<p class=\"dcr-130mj7b\">AI Overviews still pop up for <a href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/02\/google-ai-overviews-risk-harm-misleading-health-information\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">other examples the Guardian originally highlighted to Google<\/a>. 
They include summaries of information about cancer and mental health that experts described as \u201ccompletely wrong\u201d and \u201creally dangerous\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Asked why these AI Overviews had not also been removed, Google said they linked to well-known and reputable sources, and informed people when it was important to seek out expert advice.<\/p>\n<p class=\"dcr-130mj7b\">A spokesperson said: \u201cOur internal team of clinicians reviewed what\u2019s been shared with us and found that in many instances, the information was not inaccurate and was also supported by high quality websites.\u201d<\/p>\n<p class=\"dcr-130mj7b\">Victor Tangermann, a senior editor at the technology website Futurism, said the results of the Guardian\u2019s investigation showed Google had work to do \u201c<a href=\"https:\/\/futurism.com\/artificial-intelligence\/google-ai-overviews-dangerous-health-advice\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">to ensure that its AI tool isn\u2019t dispensing dangerous health misinformation<\/a>\u201d.<\/p>\n<p class=\"dcr-130mj7b\">Google said AI Overviews only show up on queries where it has high confidence in the quality of the responses. 
The company constantly measures and reviews the quality of its summaries across many different categories of information, it added.<\/p>\n<p class=\"dcr-130mj7b\">In an article for <a href=\"https:\/\/www.searchenginejournal.com\/the-guardian-google-ai-overviews-gave-misleading-health-advice\/564476\/\" data-link-name=\"in body link\" rel=\"nofollow noopener\" target=\"_blank\">Search Engine Journal<\/a>, senior writer Matt Southern said: \u201cAI Overviews appear above ranked results. When the topic is health, errors carry more weight.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Google has removed some of its artificial intelligence health summaries after a Guardian investigation found people were being&hellip;\n","protected":false},"author":2,"featured_media":232434,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,85,46,125],"class_list":{"0":"post-232433","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-il","12":"tag-israel","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/232433","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=232433"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/232433\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/232434"}],"wp:attachment":[{"href
":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=232433"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=232433"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=232433"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}