What should I know about asking ChatGPT for health advice?

Family physician Dr. Danielle Martin doesn’t mince words about artificial intelligence. 

“I don’t think patients should use ChatGPT for medical advice. Period,” said Martin, chair of the University of Toronto’s department of family and community medicine. 

Still, with roughly 6.5 million Canadians lacking a primary care provider, she acknowledges that physicians can’t stop patients from turning to chatbots powered by large language models (LLMs) for health answers.

Martin isn’t alone in her concerns. Physician groups like the Ontario Medical Association and researchers at institutions like Sunnybrook Health Sciences Centre caution patients against relying on AI for medical advice.

A 2025 study comparing 10 popular chatbots, including ChatGPT, DeepSeek and Claude, found “a strong bias in many widely used LLMs towards overgeneralizing scientific conclusions, posing a significant risk of large-scale misinterpretations of research findings.”

Martin and other experts believe most patients would be better served by using telehealth options available across Canada, such as dialling 811 in most provinces.

But she also told The Dose host Dr. Brian Goldman that patients who do choose to use chatbots can reduce the risk of harm by avoiding open-ended questions and restricting AI-generated answers to credible sources.

Learning to ask the right questions

Unlike traditional search engines, which answer questions by providing links to reputable sources, chatbots like Gemini, Claude and ChatGPT generate their own answers to users’ questions, based on patterns in the large volumes of text they were trained on.

Martin says a key challenge is figuring out which parts of an AI-generated answer to a medical question are essential information and which aren’t.

If you ask a chatbot something like, “I have a red rash on my leg, what could it be?” you could be given a “dump of information” that can do more harm than good.

“My concern is that the average busy person isn’t going to be able to read and process all of that information,” she said.

Danielle Martin is a family physician and chair of the department of family and community medicine at the University of Toronto. (Craig Chivers/CBC)

What’s more, if a patient asks “What do I need to know about lupus?”, for example, they “probably don’t know enough yet about lupus to be able to screen out or recognize the stuff that actually doesn’t make sense,” said Martin.

Martin says patients are often better served by asking chatbots to help them find reliable sources, like official government websites.

Instead of asking, “Should I get this year’s flu shot?” a better question would be, “What are the most reliable websites to learn more about this year’s flu shot?”

Be careful following treatment advice

Martin says that patients shouldn’t rely on solutions recommended by AI — like purchasing topical creams for rashes — without consulting a medical expert. 

For symptoms like rashes, which may have many possible causes, Martin recommends speaking to a health-care worker rather than asking an AI at all.

Some people might worry that an AI chatbot could talk patients out of consulting real-life physicians, but family physician Dr. Onil Bhattacharyya says it’s not as likely as some may fear.

“Generally the tools are … slightly risk-averse, so they might push you to more likely seek care than not,” said Bhattacharyya, director of Women’s College Hospital’s Institute for Health System Solutions and Virtual Care.

Bhattacharyya is interested in how technology can support clinical care, and says artificial intelligence could be a way to democratize access to medical expertise.

He uses tools like OpenEvidence, which compiles information from medical journals and gives answers that are accessible to most health professionals.

WATCH | How doctors are using AI in the exam room — and why it could become the norm: 

The Quebec government says it’s launching a pilot project involving artificial intelligence transcription tools for health-care professionals, with an increasing number saying the tools cut down the time they spend on paperwork.

Still, Bhattacharyya recognizes that it can be more challenging for patients to determine the reliability of medical advice from an AI.

“As a doctor, I can critically appraise that information,” but it isn’t always easy for patients to do the same, he said.

Bhattacharyya also said chatbots can suggest treatment options that are available in some countries but not in Canada, since many of them draw on American medical literature.

Despite her hesitations, Martin acknowledges there are some things an AI can do better than human physicians — like recalling a long list of possible conditions associated with a symptom. 

“On a good day, we’re best at identifying the things that are common and the things that are dangerous,” she said. 

“I would imagine that if you were to ask the bot, ‘What are all of the possible causes of low platelets?’ or whatever, it would probably include six things on the list that I have forgotten about because I haven’t seen or heard about them since third year medical school.”

Can patients with chronic conditions benefit from AI?

For his part, Bhattacharyya also sees AI as a way to empower people to improve their health literacy.

A chatbot can help patients with chronic conditions who are looking for general information in simple language, though he cautions against “exploring nonspecific symptoms and their implications.”

WATCH | People are turning to AI chatbots for emotional support: 

Warning: This story mentions suicide and self-harm. Millions of people, especially teens, are finding companionship and emotional support in AI chatbots, according to a kids’ digital safety non-profit. But health and technology experts say artificial intelligence isn’t properly designed for these scenarios and could do more harm than good.

“In primary care we see a large number of people with nonspecific symptoms,” he said. 

“I have not tested this, but I suspect the chatbots are not great at saying ‘I don’t know what is causing this but let’s just monitor it and see what happens.’ That’s what we say as family doctors much of the time.”