Angela Dong and Blair Bigham are practising physicians and journalists at the Dalla Lana School of Public Health.
It is illegal to practise medicine without a licence in Canada. Ontario’s Regulated Health Professions Act reserves “controlled acts” – such as performing surgery, prescribing medicine and communicating diagnoses – for licensed health professionals, because serious harm can come to patients when errors are made.
Yet large language models (LLMs), such as OpenAI’s ChatGPT and Meta’s Llama 3, now routinely cross the line from providing health information to relaying a medical diagnosis.
In today’s digital world, where health care access is hard to come by, Canadians are increasingly turning to LLMs instead of doctors’ offices – and following advice that can lead them astray. According to the Canadian Medical Association’s Misinformation Susceptibility Index, one in three Canadians say they have followed online advice instead of professional advice, and nearly one-quarter report a negative consequence of having done so.
Artificial intelligence companies claim they are not practising medicine. ChatGPT Health, a dedicated health AI chat program, says it is meant to “support, not replace, medical care” and is “not intended for diagnosis or treatment.”
But these disclaimers don’t stand up to scrutiny. Ontario regulations say “communicating a diagnosis” occurs when it is “reasonably foreseeable” that the person will rely and act upon the diagnosis. In other words, users, not tech companies, determine whether a diagnosis was conveyed.
The risks are not hypothetical, as illustrated in a case published in the Annals of Internal Medicine. A 60-year-old man asked ChatGPT how to reduce his salt intake. ChatGPT advised him to swap sodium chloride – table salt – for sodium bromide, which poisoned him and resulted in a three-week hospital stay for bromide toxicity (a rare condition nowadays, since bromide in food has been largely eliminated).
It’s unfair to dismiss the man as foolish or gullible. LLMs are engineered to be persuasive. They simulate an ideal physician encounter by generating text that is authoritative, personalized and compassionate. Users, on the other hand, may be vulnerable, scared and less medically literate. Studies show that users accept medical advice from AI applications whether or not it comes with disclaimers.
A study published recently in Nature found that ChatGPT played down the seriousness of emergencies in 52 per cent of cases, further demonstrating the risks AI poses to our health.
For years, AI-powered chatbots built on OpenAI’s models have marketed themselves as “medical diagnosis assistants,” with one popular bot on the ChatGPT platform even promising to “provide basic diagnoses” to users.
In January, OpenAI launched ChatGPT Health, claiming that more than 230 million people solicit health-related advice every week.
LLMs are designed and optimized so that ordinary users treat their answers to health queries as reliable medical advice. In doing so, they are functionally performing a controlled act without a medical licence in an unregulated, unchecked digital ecosystem.
This is illegal. In Ontario, where we practise, breaking this law can bring fines of up to $50,000, jail time and even criminal charges such as aggravated assault. Yet regulators have failed to step in, leaving technology companies free to enjoy the benefits of clinical authority without the oversight that comes with a duty of care.
Canadian courts have yet to set landmark precedents on whether LLM companies can be held liable when they practise controlled acts usually reserved for licensed medical professionals. Policy makers want to encourage AI investment. If harm ensues in the absence of strict liability, injured patients are left to fight complex product liability lawsuits where it is an uphill battle to prove negligence, defects or causation.
Legal cases are emerging. A British Columbia tribunal found Air Canada liable for erroneous information given to a consumer through an AI chatbot, establishing that a company cannot evade responsibility for AI outputs.
Urgent choices must be made to protect Canadians from the harms AI chatbots can cause when they lead people astray. Regulators could enact guardrails that bar chatbots from opining on diagnoses and treatments, but this risks making the technology all but unusable.
We prefer holding LLMs accountable the way health professionals are held accountable: through a licensing regime. Post-market auditing, mandatory harm reporting, public complaint processes and liability parameters can work together to keep Canadians safe and provide a route to justice when AI errs.
AI may hold the key to a healthier future. But only if, when it practises medicine, it is held accountable the way every physician is.