When Alexandra Watson has a question about her heart condition, her first port of call is Chad. That’s not the name of her cardiologist – rather, it’s her nickname for ChatGPT, which she has been using for the past couple of years to check her symptoms.
Her condition is a rare one, and she says that the LLM (large language model) “cuts through the noise” to provide readable and easily understandable information. “I couldn’t get my cardiologist to spend this time talking me through every question I have on the subject,” she says. Using AI “allows me to deep dive and talk hypothetically. Doctors are dismissive, Google just scares you, but Chad is helpful.”
In January, a report from OpenAI, the tech giant behind ChatGPT, claimed that more than 40 million people around the world use the bot for health advice every single day, accounting for more than five per cent of messages sent to it globally. And, last year, research from healthcare champion Healthwatch found that nine per cent of men and seven per cent of women across England are using AI chatbots for medical queries.
For Watson, the fact that the chatbot can keep track of previous issues she has asked about, giving her a more comprehensive picture, is a bonus. It references her heart queries, for example, when she asks other health-related questions.
She’s aware, though, that “Chad” has a propensity to flatter; it’s not necessarily one for tough love. “[It] wants to make me feel good about myself,” she says, noting that when she “asked about suitable diets the other day”, it mentioned that she “needed to take it easy” after an operation almost two years ago, and told her “to be kind to myself” during menopause.
Carole Railton is another convert. “I use ChatGPT most days with my work or for travel arrangements,” she says. “It seemed natural to use it for the rest of my life, including medical information, too.” Like Watson, she has a heart condition. Her regular check-ups, she says, sometimes seem like a tick sheet from the medical profession. So when she had some things going on with her body that she was not sure about, her first port of call was ChatGPT.
The chatbot also proved useful when she was planning an international trip, directing her to get a “fit to fly” note in order to travel with her medication. Its cheerful tone makes all the difference, too. “If a human was as knowledgeable and as nice, I would make a beeline for them,” she says.
More than 40 million people use ChatGPT for health advice each day (Getty/iStock)
Informative, convenient and surprisingly personable – it is perhaps unsurprising that so many of us are asking AI bots for health guidance. They might seem friendlier and less alarmist than “Dr Google” – and can be easier to get hold of than your GP. But most of these programmes were not designed to dole out medical advice, as their small-print terms and conditions tend to remind users. ChatGPT’s guidelines, for example, state that it is “not intended for use in the diagnosis or treatment of any health condition”.
But when we’re actually in the thick of a back-and-forth with a bot, it can be easy to forget this. A recent study from researchers at Stanford and Berkeley found that the proportion of LLM responses to health questions that included disclaimers or warnings fell sharply between 2022 and 2025, from 26.3 per cent to 0.97 per cent.
Like all tools built on LLMs, these chatbots are notoriously prone to errors and “hallucinations”, in which they generate factually incorrect or misleading information because they are predicting plausible patterns rather than retrieving verified facts. Last year, for example, an American medical journal reported the case of a 60-year-old man who started replacing the salt in his diet with sodium bromide after consulting ChatGPT. He ended up in psychiatric care after suffering from paranoia and hallucinations, the result of his overexposure to bromides.
Then there is the question of data privacy, an issue that many of us choose to ignore in favour of convenience in the moment. What happens to the health information we are sharing with Big Tech? And with all this in mind, should we be proceeding with far greater caution?
We used to talk about ‘Dr Google’. This is a more conversational version, which makes it feel more like speaking to a real healthcare professional
Dr Sonia Szamocki
OpenAI has, perhaps inevitably, framed its chatbot as an “important ally” in helping patients to “self-advocate” and navigate the healthcare system, especially in the United States, where the process can be complex and fragmented. In January, it rolled out ChatGPT Health for a limited group of users. This feature allows users to connect their health information, such as medical records or data from apps like Apple Health or MyFitnessPal, so that they can receive more personalised responses in their chats.
At the time, the company said this latest development was designed to “support, not replace, medical care”, and explained that health information would be stored separately from other chats. It’s currently unavailable in the UK, the European Economic Area and Switzerland, however, due to tighter restrictions around digital privacy.
Last month, a study published in the journal Nature Medicine tested the chatbot on 60 medical scenarios, varying details such as the patient’s gender or race, or adding test results and comments from family members. The researchers found that while ChatGPT Health performed well in “textbook emergencies”, where patients reported unmistakable symptoms, it floundered elsewhere.
In 51.6 per cent of cases where the patient needed to immediately head to hospital, the chatbot advised them to stay at home or wait for a routine appointment. “ChatGPT Health is most reliable when the clinical decision is least consequential, and least reliable when it matters most,” lead researcher Ashwin Ramaswamy told The BMJ.
When The Independent contacted OpenAI, they told us that they welcome independent research around AI healthcare systems, but claimed that the study doesn’t reflect how people typically tend to use ChatGPT Health, or how it is designed to work in real-life scenarios. They added that they are continuing to improve the safety and reliability of the programme through testing and feedback before rolling it out more broadly.
Of course, trying to access health-related information online is nothing new. Who among us can honestly say that they’ve never trawled the web to learn more about some apparently minor symptom, only to steadily convince themselves that said symptom is in fact some dreadful harbinger of doom? “We used to talk about ‘Dr Google’,” says Dr Sonia Szamocki, a former NHS doctor who is now founder and CEO of AI healthtech company 32Co. “This is a more conversational version, which makes it feel more like speaking to a real healthcare professional.”
“What people are trying to solve is not a new problem, which is that it’s hard to get access to doctors,” says Szamocki. “Waiting lists are high, and that’s if you want to just get to a GP.” It is even harder to get more specialist knowledge, she notes. “That’s because there are even more obstacles in the way. So it’s completely natural that people go online to try and get the information that they’re struggling to get.”
An AI doctor giving a diagnosis on a smartphone (Getty/iStock)
Consulting an LLM is not the same as looking up an answer in a book, or even searching Google, which is essentially “pulling a fact out and presenting that to you on a plate”, Szamocki says. Instead, LLMs are “pattern recognisers”, she explains. “They are probabilistic mechanisms to find the most likely answer to a question [which have learned from billions of texts to] try to predict what’s the next best word in a series of words.”
And, crucially, “you can’t be 100 per cent sure if you ask it something, that it will retrieve exactly the right fact”. That, Szamocki adds, is “really where the worry comes from”.
Plus, an LLM will tend to try to be extremely helpful even when it doesn’t actually know the answer; these platforms, she says, have a habit of prioritising helpfulness over accuracy. Hallucinations, Szamocki adds, can occur “where [an LLM] is trying to fill a gap in knowledge but saying ‘look, it’s probably this’”.
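To make that “next best word” idea concrete, here is a deliberately simplified sketch in Python. The phrase, the candidate words and the probabilities are all invented for illustration; a real LLM draws on billions of learned parameters rather than a lookup table, but the underlying principle – choosing the statistically likeliest continuation rather than a verified fact – is the same.

# A toy illustration, not a real model: the numbers below are invented
# for the example. A real LLM learns such probabilities from vast text.
next_word_probs = {
    ("chest", "pain", "can", "mean"): {
        "anxiety": 0.40,       # the statistically likeliest continuation
        "indigestion": 0.35,
        "angina": 0.25,        # the serious option is not the likeliest
    },
}

def predict_next(context):
    """Pick the most probable next word, the way an LLM extends a sentence."""
    candidates = next_word_probs[context]
    return max(candidates, key=candidates.get)

print(predict_next(("chest", "pain", "can", "mean")))  # prints "anxiety"

The toy model returns whichever word scored highest in its training data, whether or not it happens to be medically correct for the person asking – which is exactly where, as Szamocki puts it, “the worry comes from”.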
The way your prompt is written can also affect the response you receive. When you send a message or question to a chatbot, you put in only what you think is important, notes Dr Caroline Pilot, acting chief medical officer for digital clinic HealthHero. “So the prompt is biased in the first place”. You might also inadvertently leave out key information that a doctor would ask you about. “When I’m consulting with someone, I let them tell me what they think is important,” she explains – while all the time also wondering: “OK, but did they have this other thing that they didn’t mention?”
To work around all this, chatbot fan Alexandra Watson says she always asks for sources and requests a cross-check when she presents ChatGPT with a medical question.
Are doctors concerned about how “Dr ChatGPT” might be changing the way their patients are seeking medical advice? “I know lots of clinicians mind, but I really don’t mind if people have done their homework and asked a chatbot,” Dr Pilot says. “I find it interesting to have the conversation and explore their fears and concerns, and what the chatbot said.”
But it can depend on the patient, she says. If someone has a fixed idea of what their problem might be, they may already be frightened by whatever the internet has told them it is.
Professor Victoria Tzortziou-Brown is chair of the Royal College of General Practitioners. “It’s encouraging to see patients being curious about their health,” she says. But she cautions that chatbots are not without risks. “It’s not always clear where the information is being drawn from or how accurate it is,” she says, adding that the results could therefore contain content that is neither evidence-based nor trustworthy.
Even the most reputable AI providers rarely allow users to choose how long their health-related data is retained
Dr Aaisha Makkar
There is “huge potential” for technology to support patients, she adds. “But this will always need to work alongside and complement the work of doctors and other healthcare professionals.”
And it is important to bear in mind that handing over our health information to LLMs can introduce significant data privacy risks. Dr Aaisha Makkar, a lecturer in computer science at the University of Derby, specialises in ethical privacy-preserving technologies. “Many AI systems store user input in cloud environments, where models may iteratively learn from the data,” she says. But this process is not always guaranteed to follow strict anonymisation standards.
Plus, sometimes LLMs can “infer or reconstruct sensitive personal details from underlying patterns in the data”, even if users have tried to steer away from obvious identifiers. Most of us, Makkar notes, will have little idea about how our data is processed behind the scenes. “Even the most reputable AI providers rarely allow users to choose how long their health-related data is retained.”
Her advice, therefore, is to turn to chatbots “only for general medical guidance, rather than for personalised medical advice that requires sharing detailed health information”.
Pilot, meanwhile, is asked “all the time” about whether AI will replace doctors. “I don’t see that it will replace them,” she says. “I think that it will aid them, and that they will use it as a consulting tool.”
And however friendly and eager to please it might seem, an AI chatbot cannot replace a conversation with a clinician who knows the patient, understands the context, and can make safe, evidence-based decisions, says Tzortziou-Brown.