The question facing Ontario’s health-care system is no longer whether artificial intelligence (AI) will play a role in care delivery. It already does – AI tools are being used in exam rooms, embedded in electronic medical records and accessed by patients directly, often before a physician is ever consulted.

The real question is whether AI will be integrated in a way that strengthens patient care, supports physicians and upholds the core values of medicine, or whether it will be adopted haphazardly, driven by commercial interests and system pressures, leaving physicians reacting to technologies they did not design and do not govern.

AI holds enormous promise. When thoughtfully designed and carefully implemented, it has the potential to reduce administrative burden, improve clinical decision-making, enhance system planning and support patients in navigating an increasingly complex health-care landscape. Yet without clear, focused, physician-led and patient-centred governance, the AI we get may not be the AI we need.

AI is gaining momentum in Ontario in part because our health-care system is under extraordinary strain. Community-based and primary care are increasingly fragile. Patient volumes are rising, complexity is increasing and physicians are facing relentless administrative demands from forms, documentation and non-clinical tasks. These pressures contribute directly to burnout, early retirement and reduced access to care.

Against this backdrop, AI is being presented as a compelling solution, promising efficiency, scalability, cost containment and convenient care. It offers the possibility of doing more with fewer human resources in a system already struggling to meet demand. Aggressive marketing, often directed not only at health-care organizations but also directly at patients, has boosted expectations. Many of these tools are being adopted quickly, often without independent evaluation or robust local validation.

Some early applications of AI have delivered tangible benefits. AI scribes, for example, have demonstrated the potential to reduce time spent charting, allowing physicians to focus more fully on the patient in front of them. Even modest time savings, when multiplied across thousands of clinical encounters, can translate into meaningful reductions in workload and burnout. Other jurisdictions are piloting AI-assisted prescription renewals and administrative triage, pushing beyond simple transcription toward more complex clinical-support tasks.

The integration of AI into electronic medical records and hospital information systems (EMRs/HISs) raises additional possibilities. An embedded AI clinical decision support system (AI-CDS) that can surface relevant guidelines, summarize evidence or flag potential concerns during a patient encounter may improve efficiency and consistency of care. AI-enabled literature review and evidence synthesis platforms offer clinicians timely access to evolving medical literature, helping address a long-standing challenge in clinical practice.

At the population level, AI has the potential to support health system planning by identifying trends, anticipating demand and highlighting gaps in care. Used responsibly, these tools could improve resource allocation and help policymakers respond more effectively to population health needs.

These are real and meaningful opportunities. Ignoring them would be a mistake. But embracing them uncritically would be an even greater one.

Despite its promise, AI carries significant risks that cannot be ignored. Patient safety remains the most immediate concern. Large language model-based chat tools, now widely accessible to the public, can generate convincing but incorrect medical advice. There are documented cases of missed diagnoses, inappropriate medication recommendations and serious harm associated with overreliance on AI-generated health information.

AI systems are prone to so-called “hallucinations,” producing outputs that sound authoritative but are factually wrong. In a clinical context, these errors can be dangerous. Even AI scribes, often viewed as low-risk tools, can introduce inaccuracies into the medical record, with downstream consequences for patient care and medicolegal risk.

As AI systems become more sophisticated, they are also becoming more “empathetic.” Chatbots can spend unlimited time with patients, responding patiently, validating concerns and mimicking the language of care. While this may feel supportive, these systems are trained within largely opaque “black box” models that inherently reflect gender, racial and other societal biases, and operate without meaningful checks and balances or a human in the loop. This fosters a false sense of a therapeutic relationship, in which patients may attribute understanding, accountability or clinical judgment to systems that possess none of these qualities. The potential harm may be greater than with human interaction, because the public often perceives AI as more accurate, objective and infallible than it truly is.

Increasingly, patients arrive at clinical encounters having already consulted AI tools and formed fixed expectations about diagnosis or treatment. Managing these expectations requires time, communication and trust. When AI recommendations conflict with physician judgment, the potential for confusion and erosion of trust increases on both sides of the encounter.

Without clear guidance for patients and clinicians alike, AI risks complicating rather than simplifying care.

Perhaps the most profound risk posed by poorly governed AI is its potential to erode primary care, the foundation of Ontario’s health-care system. Family physicians provide far more than episodic diagnosis and treatment. Longitudinal primary care is built on continuity, context, relationship and, most importantly, trust. That trust is established over time through consistent presence, accountability for outcomes and a deep understanding of patients’ medical histories, family dynamics, social circumstances and unspoken cues with the shared expectation that every decision is grounded in the patient’s best interest.

A patient’s downward gaze, a hesitation in response or a subtle change in affect can signal something significant. These are not data points easily captured by algorithms. Family physicians often care for multiple members of the same family, integrating information across generations and contexts. This depth of understanding supports safer, more effective care.

AI chatbots, by contrast, see only snapshots. They lack continuity, relational memory and accountability. They cannot hold responsibility for outcomes, nor can they navigate the ethical and emotional complexity inherent in clinical care. Used as adjuncts, they may provide useful support. Used as substitutes, they risk fragmenting care further and undermining the very foundations of effective primary care.

In a system already struggling to recruit and retain family physicians, overreliance on AI as a replacement rather than a complement could accelerate decline rather than alleviate pressure.

Equity concerns also loom large in AI adoption. AI systems are only as good as the data on which they are trained. If training datasets do not adequately represent Ontario’s diverse populations, including Indigenous communities and marginalized groups, algorithms may perform poorly or generate biased outputs.

Bias in AI is not merely theoretical. It can lead to misclassification, underdiagnosis or inappropriate recommendations for certain populations. Without rigorous local validation and ongoing monitoring, AI risks worsening existing inequities rather than reducing them.

Equity concerns extend to access. Rural and remote communities may face connectivity challenges that limit effective use of AI tools. Non-English speakers and individuals with limited digital literacy may be excluded from benefits that others enjoy. If AI becomes embedded in pathways of access to care, these gaps may widen the digital divide and deepen inequity.

Ensuring equitable implementation requires intentional design, inclusive data practices and policy oversight. It cannot be left to market forces alone, especially since AI in health care depends on vast quantities of personal health information. How this data is collected, stored, used and shared has profound implications for trust in the health-care system.

Clear rules around data ownership are essential. Patients must understand how their information is being used, and clinicians must have confidence that data is handled responsibly. Transparency around commercial interests is critical. The sale or secondary use of health data for purposes unrelated to patient care undermines trust and raises serious ethical concerns.

Ontario’s existing privacy laws and institutional policies provide some protection, but they are fragmented and not designed with AI-specific risks in mind. As AI systems become more integrated into care delivery, governance must evolve accordingly. Strong security standards, accountability mechanisms and clear limits on commercial exploitation are non-negotiable if public trust is to be maintained.

Yet governance frameworks in Ontario have not kept pace. While professional accountability structures, privacy legislation and hospital policies exist, they are not cohesive, comprehensive or AI-specific. There is no unified vision for how AI should be used in health care, nor clear guidance for clinicians or patients.

This governance gap is not benign. In the absence of clear standards, decisions default to vendors, institutions under pressure or individual clinicians navigating risk alone. This fragmentation increases variability, exposes patients and physicians to harm and undermines professional autonomy.

What is needed is clear, physician-led governance that positions AI as a complement to medical practice, not a replacement. Physicians must work alongside patients, ethicists, data scientists, legal experts and policymakers to develop standards for safe, ethical and effective AI use in health care.

AI literacy is also essential. Clinicians need training to understand the capabilities and limitations of these tools. Patients need education to use AI safely and appropriately. Without shared understanding, misunderstanding and misuse are inevitable.

And without clear legal and financial frameworks, who is responsible when AI fails: vendors, institutions or physicians? Should physicians be expected to compromise their professional judgment under commercial pressure?

An emerging concern is the environmental impact of AI, not only its energy use but also its effects on planetary health and human well-being. Training and running large AI models require significant computational power, and as AI spreads through health care, these demands accumulate.

Responsible implementation must consider these environmental and health costs alongside clinical benefits. Climate-related health impacts can increase illness and downstream health-care costs. Energy-efficient algorithms, careful procurement and sustainability should guide AI adoption to ensure innovation does not create future health crises or undermine broader societal goals.

AI is not a passing trend. It is a transformational technology that is shaping the future of medicine. Used wisely, it can help address system pressures, reduce administrative burden and support high-quality care. Used poorly, it risks misdiagnosis, fragmentation, inequity and erosion of trust.

The outcome is not predetermined. It depends on governance.

Without firm leadership from the profession, physicians risk being left behind, reacting to technologies imposed upon them rather than guiding their development and use. Medicine cannot afford a future in which clinical judgment, professional autonomy and patient relationships are secondary to efficiency metrics and commercial priorities.

Physicians must claim a central role in shaping how AI is integrated into health care. Doing so is not about resisting innovation. It is about ensuring that innovation serves patients, supports clinicians and strengthens the profession.

AI is the future of health care. The real question is who will guide that future: will physicians lead, ensuring technology serves patients, or will we be forced to adapt after the fact, risking the trust and relationships that form the foundation of all care? The choice and the responsibility lie with us.