{"id":498056,"date":"2026-03-27T10:00:09","date_gmt":"2026-03-27T10:00:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/498056\/"},"modified":"2026-03-27T10:00:09","modified_gmt":"2026-03-27T10:00:09","slug":"what-everyone-should-know-before-asking-chatgpt-for-medical-advice-and-get-a-safe-answer","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/498056\/","title":{"rendered":"What everyone should know before asking ChatGPT for medical advice and get a safe answer"},"content":{"rendered":"<p>When Alexandra Watson has a question about her <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.independent.co.uk\/life-style\/health-and-families\/heart-health-disease-sleep-night-owl-b2909310.html\" title=\"heart \">heart<\/a> condition, her first port of call is Chad. That\u2019s not the name of her cardiologist \u2013 rather, it\u2019s her nickname for <a href=\"https:\/\/www.independent.co.uk\/topic\/chatgpt\" rel=\"nofollow noopener\" target=\"_blank\">ChatGPT<\/a>, which she has been using for the past couple of years to check her <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.independent.co.uk\/health-and-wellbeing\/colcon-cancer-symptoms-mel-schilling-b2944501.html\" title=\"symptoms \">symptoms<\/a>. <\/p>\n<p>Her condition is a rare one, and she says that the LLM (large language model) \u201ccuts through the noise\u201d to provide readable and easily understandable information. \u201cI couldn\u2019t get my cardiologist to spend this time talking me through every question I have on the subject,\u201d she says. \u201cBut using <a href=\"https:\/\/www.independent.co.uk\/topic\/ai\" rel=\"nofollow noopener\" target=\"_blank\">AI<\/a> \u201callows me to deep dive and talk hypothetically. Doctors are dismissive, Google just scares you, but Chad is helpful.\u201d<\/p>\n<p>In January, a <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.independent.co.uk\/tech\/chatgpt-ai-health-advice-b2894982.html\">report from OpenAI<\/a>, the tech giant behind ChatGPT, claimed that more than 40 million people around the world use the bot for health advice every single day, accounting for more than five per cent of messages sent to it globally. And, last year, <a rel=\"nofollow noopener\" target=\"_blank\" href=\"https:\/\/www.independent.co.uk\/news\/uk\/home-news\/nhs-healthwatch-england-england-government-instagram-b2858745.html\">research<\/a> from healthcare champion Healthwatch found that nine per cent of men and seven per cent of women across England are using AI chatbots for medical queries. <\/p>\n<p>For Watson, the fact that the chatbot can keep track of previous issues she has asked about, to give me a more comprehensive picture, is a bonus. It references her heart queries, for example, when she asks other health-related questions. <\/p>\n<p>She\u2019s aware, though, that \u201cChad\u201d can have a propensity to flatter; it\u2019s not necessarily one for tough love. \u201c[It] wants to make me feel good about myself,\u201d she says, noting that when she \u201casked about suitable diets the other day\u201d, it mentioned that she \u201cneeded to take it easy\u201d after an operation almost two years ago, and told her \u201cto be kind to myself\u201d during menopause. <\/p>\n<p>Carole Railton is another convert. \u201cI use ChatGPT most days with my work or for travel arrangements,\u201d she says. 
Carole Railton is another convert. “I use ChatGPT most days with my work or for travel arrangements,” she says. “It seemed natural to use it for the rest of my life, including medical information, too.” Like Watson, she has a heart condition. Her regular check-ups, she says, sometimes seem like a tick sheet from the medical profession. So when she had some things going on with her body that she was not sure about, her first port of call was ChatGPT.

The chatbot also proved useful when she was planning an international trip, directing her to get a “fit to fly” note in order to travel with her medication. Its cheerful tone makes all the difference, too. “If a human was as knowledgeable and as nice, I would make a beeline for them,” she says.

(Image: More than 40 million use ChatGPT for health advice each day. Getty/iStock)

Informative, convenient and surprisingly personable – it is perhaps unsurprising that so many of us are asking AI bots for health guidance. They might seem friendlier and less alarmist than “Dr Google” – and can be easier to get hold of than your GP. But most of these programmes were not designed to dole out medical advice; their small-print terms and conditions will tend to remind users of this. ChatGPT’s guidelines, for example, state that it is “not intended for use in the diagnosis or treatment of any health condition”.

But when we’re actually in the thick of a back-and-forth with a bot, it can be easy to forget this. A recent study from researchers at Stanford and Berkeley found that disclaimers and warnings in response to health questions notably decreased on LLMs between 2022 and 2025, dropping from 26.3 per cent to 0.97 per cent.

Like all LLMs, these chatbots are notoriously prone to errors and hallucinations, in which they generate factually incorrect or misleading information by predicting a plausible-looking pattern. Last year, for example, an American medical journal reported the case of a 60-year-old man who started replacing the salt in his diet with sodium bromide after consulting ChatGPT. He ended up in psychiatric care after suffering from paranoia and hallucinations, the result of his overexposure to bromides.

Then there is the question of data privacy, an issue that many of us choose to ignore in favour of convenience in the moment. What happens to the health information we are sharing with Big Tech? And with all this in mind, should we be proceeding with far greater caution?
OpenAI has, perhaps inevitably, framed its chatbot as an “important ally” in helping patients to “self-advocate” and navigate the healthcare system, especially in the United States, where the process can be complex and fragmented. In January, it rolled out ChatGPT Health for a limited group of users. This feature allows users to connect their health information, such as medical records or data from apps like Apple Health or MyFitnessPal, so that they can receive more personalised responses in their chats.

At the time, the company said this latest development was designed to “support, not replace, medical care”, and explained that health information would be stored separately from other chats. It’s currently unavailable in the UK, the European Economic Area and Switzerland, however, due to tighter restrictions around digital privacy.

Last month, a study published in the journal Nature Medicine tested the chatbot on 60 medical scenarios, changing various conditions such as the patient’s gender or race, or adding test results and comments from family. The researchers found that while ChatGPT Health performed well in “textbook emergencies”, where patients reported unmistakable symptoms, it floundered elsewhere.

In 51.6 per cent of cases where the patient needed to immediately head to hospital, the chatbot advised them to stay at home or wait for a routine appointment. “ChatGPT Health is most reliable when the clinical decision is least consequential, and least reliable when it matters most,” lead researcher Ashwin Ramaswamy told The BMJ.

When The Independent contacted OpenAI, they told us that they welcome independent research around AI healthcare systems, but claimed that the study doesn’t reflect how people typically use ChatGPT Health, or how it is designed to work in real-life scenarios. They added that they are continuing to improve the safety and reliability of the programme through testing and feedback before rolling it out more broadly.

Of course, the act of trying to access health-related information online is nothing new. Who among us can honestly say that they’ve never trawled the web to learn more about some apparently minor symptom, only to steadily convince ourselves that said symptom is in fact some dreadful harbinger of doom? “We used to talk about ‘Dr Google’,” says Dr Sonia Szamocki, a former NHS doctor who is now founder and CEO of AI healthtech company 32Co. “This is a more conversational version, which makes it feel more like speaking to a real healthcare professional.”

“What people are trying to solve is not a new problem, which is that it’s hard to get access to doctors,” says Szamocki. “Waiting lists are high, and that’s if you want to just get to a GP.” It is even harder to get more specialist knowledge, she notes. “That’s because there are even more obstacles in the way. So it’s completely natural that people go online to try and get the information that they’re struggling to get.”

(Image: An AI doctor giving a diagnosis on a smartphone. Getty/iStock)
Consulting an LLM is not the same as looking up an answer in a book, or even searching Google, which is essentially “pulling a fact out and presenting that to you on a plate”, Szamocki says. Instead, LLMs are “pattern recognisers”, she explains. “They are probabilistic mechanisms to find the most likely answer to a question [which have learned from billions of texts to] try to predict what’s the next best word in a series of words.”

And, crucially, “you can’t be 100 per cent sure, if you ask it something, that it will retrieve exactly the right fact”. That, Szamocki adds, is “really where the worry comes from”.

Plus, an LLM will tend to try to be extremely helpful even when it doesn’t actually know the answer with 100 per cent certainty. These platforms, she says, have a habit of prioritising helpfulness over, say, accuracy. Hallucinations, Szamocki says, can occur “where [an LLM] is trying to fill a gap in knowledge but saying ‘look, it’s probably this’”.

The way your prompt is written can also affect the response you receive. When you send a message or question to a chatbot, you’re putting in what you think is important, notes Dr Caroline Pilot, acting chief medical officer for digital clinic HealthHero. “So the prompt is biased in the first place.” You might also inadvertently leave out key information that a doctor would ask you about. “When I’m consulting with someone, I let them tell me what they think is important,” she explains. But she is also wondering: “OK, but did they have this other thing that they didn’t mention?”

To work around all this, chatbot fan Alexandra Watson says she always asks for sources and requests a cross-check when she presents ChatGPT with a medical question.

Are doctors concerned about how “Dr ChatGPT” might be changing the way their patients seek medical advice? “I know lots of clinicians mind, but I really don’t mind if people have done their homework and asked a chatbot,” Dr Pilot says. “I find it interesting to have the conversation and explore their fears and concerns, and what the chatbot said.”

But it can depend on the patient, she says. If someone has a fixed idea as to what their problem might be, they might already be scared of whatever it was that the internet said it was.

Professor Victoria Tzortziou-Brown is chair of the Royal College of General Practitioners. “It’s encouraging to see patients being curious about their health,” she says. But she cautions that chatbots are not without risks. “It’s not always clear where the information is being drawn from or how accurate it is,” she says, adding that the results could therefore contain content that is neither evidence-based nor trustworthy.

There is “huge potential” for technology to support patients, she adds. “But this will always need to work alongside and complement the work of doctors and other healthcare professionals.”
And it is important to bear in mind that handing over our health information to LLMs can introduce significant data privacy risks. Dr Aaisha Makkar, a lecturer in computer science at the University of Derby, specialises in ethical privacy-preserving technologies. “Many AI systems store user input in cloud environments, where models may iteratively learn from the data,” she says. But this process is not always guaranteed to follow strict anonymisation standards.

Plus, LLMs can sometimes “infer or reconstruct sensitive personal details from underlying patterns in the data”, even if users have tried to steer away from obvious identifiers. Most of us, Makkar notes, will have little idea of how our data is processed behind the scenes. “Even the most reputable AI providers rarely allow users to choose how long their health-related data is retained.”

She therefore advises turning to chatbots “only for general medical guidance, rather than for personalised medical advice that requires sharing detailed health information”.

Pilot, meanwhile, is asked “all the time” whether AI will replace doctors. “I don’t see that it will replace them,” she says. “I think that it will aid them, and that they will use it as a consulting tool.”

And however friendly and eager to please it might seem, an AI chatbot cannot replace a conversation with a clinician who knows the patient, understands the context, and can make safe, evidence-based decisions, says Tzortziou-Brown.