A quiet paradox is unfolding in health care. Around the world, governments are centralizing health systems—consolidating hospitals, budgets, data and decision-making—at the very moment when the most transformative technology of our era, artificial intelligence (AI), depends on decentralization to work best. It is a collision between two incompatible logics: the bureaucratic logic of uniformity and the network logic of AI, which thrives on diversity, constant feedback and local adaptation. And the outcome of this clash may define the future of modern medicine.
Tom Wheeler, the former FCC chair turned historian, has argued that every major communication revolution—from Gutenberg’s printing press to the internet—reshaped how society organizes itself. AI, he says, is the next great network revolution. But AI goes further: Unlike earlier technologies that merely transmitted information, AI systems learn from it. Their strength lies in context. An AI model that detects cancer or predicts readmissions improves only when diverse, real-world data flow through it and when clinicians closest to patients can adapt these tools to the needs of their communities.
This makes centralized systems particularly vulnerable to what might be called an “intelligence bottleneck.” Data may move upward into large provincial or national bureaucracies, but insight rarely flows back down to the front lines where care actually happens. Local innovation stalls. Learning slows. Without the ability to act quickly on new insights, AI’s value collapses. A system designed to minimize variation cannot benefit from a technology that depends on it.
The real promise of AI is not just prediction—it is continuous improvement. AI is the engine that can power true Learning Health Systems, where data continuously informs practice and practice continuously refines policy. When AI is embedded close to care, frontline teams can observe emerging patterns in real time, test new approaches and feed those lessons back into the broader system. Hospitals, clinics, rehabilitation centers and community programs become nodes in a living learning network that grows more intelligent with each interaction.

But this only works when those closest to patients have the autonomy and agility to act on what the data show. Hyper-centralized systems, by design, slow that responsiveness to a crawl and risk turning AI into a static tool rather than a dynamic learning partner capable of driving better outcomes, reducing inequities and strengthening system resilience.
Canada provides a useful case study because each province acts as its own health system. Quebec has gone furthest toward centralization. The recent creation of Santé Québec collapsed regional governance into a single mega-agency—one of the most extreme consolidations of health authority in the Western world. The intent is uniformity, but the risk is the suppression of local intelligence and clinical leadership. Ontario, while structuring its system around Ontario Health, created Ontario Health Teams to serve as place-based integrators. In theory, these teams could drive innovation. But without real decision-making authority or control over resources, most remain limited in scope—expected to innovate without the tools or autonomy to do so.
Both provinces reflect the same paradox: Systems are centralizing precisely when their technologies demand decentralization.
Value-based care is often used to justify centralization under the belief that uniform metrics produce efficiency. Yet value is fundamentally contextual. A frail elderly patient in Montreal may define value as the ability to stay home safely. A family in Nunavik may define it as reliable access to basic care without extensive travel. AI can help tailor interventions to these distinct realities—if local teams have the autonomy to use it well. Value emerges from learning, not from control. And learning happens at the edges of systems far more than at the center.
The pandemic offered a powerful, concrete example of antifragility. Some organizations responded not only by coping but by transforming: Virtual care adoption accelerated by years, workflows were redesigned in days, and cross-departmental collaboration deepened. New models of care emerged, many of which continue today. These organizations proved antifragile in Nassim Taleb’s sense: the crisis created space for local experimentation and rapid learning, and they improved from the shock rather than merely surviving it. AI could supercharge this antifragility, but only in systems that allow frontline creativity, rapid iteration and real-time feedback.

To unlock AI’s potential, health systems must adopt governance that reflects the architecture of the technology. This includes federated data governance supported by real policy levers: data trusts to ensure transparent and ethical use of shared information; incentives for local innovation; shared accountability frameworks based on real-world outcomes rather than compliance checklists; privacy-by-design standards; and interoperability requirements that allow local tools to integrate smoothly with provincial and national systems. This hybrid approach preserves public accountability while empowering clinicians and communities to learn and innovate.
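To make that architecture concrete, consider federated learning, the machine-learning pattern that federated data governance mirrors: each site trains a model on its own data and shares only model updates, never patient records, with a central coordinator. The sketch below is a minimal, hypothetical illustration; the site data, function names and simple weighted averaging are assumptions for demonstration, not any health system’s actual implementation, and real deployments would add safeguards such as secure aggregation and differential privacy.

```python
# Minimal sketch of federated averaging, for illustration only.
# Each "site" (hospital, clinic) keeps its data local and shares only model
# weights; a coordinator averages them, weighted by local sample size.
# All names and data here are hypothetical, not any province's real system.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's training pass: logistic-regression gradient steps on local data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)     # gradient of the log loss
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """Coordinator: gather each site's updated weights, average by data volume."""
    updates, sizes = [], []
    for X, y in sites:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy demo: three sites with differently distributed (synthetic) data.
rng = np.random.default_rng(0)
sites = []
for shift in (-1.0, 0.0, 1.0):                # local context differs per site
    X = rng.normal(shift, 1.0, size=(200, 3))
    y = (X @ np.array([1.0, -0.5, 0.25]) + rng.normal(0, 0.3, 200) > 0).astype(float)
    sites.append((X, y))

w = np.zeros(3)
for _ in range(20):                           # rounds of federated training
    w = federated_round(w, sites)
print("global weights:", w)
```

The design choice mirrors the policy argument: insight aggregates at the center, while data, context and adaptation stay at the edges.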
Governance should emphasize accountability for learning, not only compliance. Organizations should be evaluated on their ability to respond to new insights, spread successful innovations, and demonstrate improved outcomes over time. Leaders must become stewards of learning, grounded in complexity science, AI literacy, ethics, value-based innovation and community engagement. In an AI-enabled world, leadership must shift from command-and-control to cultivating adaptive capacity.
The question facing policymakers is no longer “Should we use AI?” It is: “Can our health system learn from AI fast enough to keep up with it?” If reforms smother variation and slow learning, they will fail—regardless of how advanced the technology may be. Centralization may offer the illusion of control, but it often produces fragility, ossification and slow response in the face of new challenges.
AI is the most powerful amplifier of intelligence humanity has ever created. But it will only achieve its promise if our health systems are designed to learn—quickly, continuously and collaboratively. The future will belong not only to those who learn fastest, but to those who build systems that help everyone learn together to improve lives.
This is a call to shared learning stewardship. Policymakers must design systems that enable learning at every level. Clinicians must lead as innovators, not merely implementers. Citizens must expect—and demand—health systems that evolve alongside their needs. If we embrace this stewardship, we can build health systems that are not only more efficient, but more intelligent, more resilient, and more humane.
Dr. Lawrence Rosenberg is a member of the Newsweek CEO Circle, an invite-only executive community of subscribers.