Roughly half of Canadians are now turning to artificial intelligence for health information – and those who do are five times more likely to report harms to their health than those who don’t, according to a new survey from the Canadian Medical Association.
On Tuesday, the CMA released its 2026 Health and Media Tracking Survey, which found that nearly all Canadians are searching the web for health information and the majority – 64 per cent – are encountering false or misleading content.
Eighty per cent of respondents said they go online for health information because it provides the quickest path to finding answers. But 57 per cent said they only turned to the Internet when they couldn’t access a family doctor or other health care professional.
And while Canadians are increasingly using AI platforms such as ChatGPT or Google AI summaries, only 27 per cent actually trust AI to give accurate health information.
“What is most disturbing is that they are cautious about AI, they don’t trust it … but they’re using it,” said CMA president Margot Burnell, a medical oncologist in New Brunswick and associate professor of medicine at Dalhousie University. “I was surprised by that.”
“I think it’s because they don’t have access to care,” she continued. “And so if you don’t have ready access … this is where you go.”
The survey was conducted by Abacus Data and polled 5,000 Canadians online from Nov. 3 to 13, 2025. It has a margin of error of plus or minus 1.38 per cent, 19 times out of 20.
This is the third iteration of the CMA’s annual health and media survey, and it paints a worrying portrait of a health information ecosystem where trust continues to erode.
Most Canadians – 77 per cent – say they’re worried about health misinformation flowing out of the United States, and this is causing a spillover effect of increased skepticism toward all health information – even from reputable sources.
When it comes to getting accurate health information, Canadians are less trusting of news organizations and provincial public health agencies. Forty per cent are neutral, skeptical or downright distrustful of scientific studies.
Meanwhile, more Canadians looking for reliable health care guidance are turning to family members. But most still “fully trust” or “generally believe” health care practitioners – such as their family doctor, pharmacist or nurse practitioner – as experts who can help them navigate the deluge of online information.
The CMA’s latest survey results add to growing concerns over the potential harms of AI’s expanding influence as a source of medical or health care system information. Fifty-two per cent of respondents reported using AI search results for health information and 48 per cent used them for treatment advice.
Last month, an investigation by the Guardian found that Google’s AI Overviews were providing inaccurate information, some of which experts described as “dangerous” or “alarming.” In one case, Google’s AI summaries were advising people with pancreatic cancer to avoid high-fat foods; experts told the Guardian that this advice was actually the opposite of what should be recommended, and increased patients’ risk of dying.
(In its response to the Guardian piece, Google said the vast majority of its AI Overviews were factual and helpful, and that it was continuously making improvements.)
Amrit Kirpalani, a pediatric nephrologist in London, Ont., has noticed that more and more patients are coming to him with information gleaned from AI platforms.
“People have said, ‘Hey, I typed the diagnosis into ChatGPT and it told me that me or my child could also have a problem with this … or maybe I should be on this drug,’ ” he said. “I’ve definitely seen it cause a lot of anxiety.”
In 2024, Dr. Kirpalani published a paper in the journal PLOS One that demonstrated the potential pitfalls of using ChatGPT for diagnosing medical issues. He and his co-authors fed the chatbot 150 medical cases used to test the diagnostic accuracy of health care practitioners.
ChatGPT correctly diagnosed only half of the cases. But the chatbot was excellent at spitting out clear and convincing medical information – even when it got the diagnosis completely wrong.
Dr. Kirpalani, an associate professor with Western University’s Schulich School of Medicine and Dentistry, noted that research has shown that marginalized patients are more likely to resort to the Internet when they need help with their health or medical issues.
He stressed the need for better access to health care for all Canadians. And as a physician, he says that AI is now a part of the routine discussions he’s having with his patients.
“I know people are using these tools,” Dr. Kirpalani said. “I’d rather have that open discussion than say, ‘Oh don’t use that’ – and they use it anyway.”