“My watch saved my life.”

Liam — not his real name — is a 75-year-old retired teacher in Boston. Two years ago, his son-in-law gave him an Apple Watch. Soon after, it began flagging something strange: possible atrial fibrillation.

His cardiologist glanced at the alerts Liam presented, checked them against an older EKG, and dismissed the concern.

Back home, the irregular rhythms continued. A week later, feeling worse, Liam called his nurse practitioner, who told him to get to the emergency department immediately. There, doctors confirmed “a bad case” of AFib and shocked his heart back into a normal rhythm.

“The watch caught it earlier than the doctors,” Liam says.

I include his story in my new book “Dr. Bot” because it captures an emerging truth we can’t afford to ignore: Health care is already a team sport between humans and machines — but right now, we’re all playing it badly. The teams are unbalanced, the rules unwritten, and too often, the patient is left on the sidelines.

I’ve been a health care researcher for nearly 20 years, working in medical schools in the U.S., U.K., and now Sweden. My background is in the philosophy of science and mind, which might seem like a strange fit for discussing health care in the AI era. But for AI to fulfill its promise in medicine, expertise like mine can’t remain peripheral. Domain experts in how humans and machines think are essential to optimizing patient care.

In chess, Garry Kasparov famously observed that the strongest player is not the best human or the best machine, but a “weak” human paired with a strong machine and a better process. In this sense, “weak” doesn’t mean incompetent — it means a human who knows when to let the machine lead.

Applied to medicine, the analogy sounds appealing: Keep the doctor, add the AI, get better results. But in practice, the “human” is almost always assumed to be a traditionally trained doctor. We seldom ask whether a doctor is necessarily the right human for the role — or whether, and when, the role may not need a human at all.

This bias runs deep. In the 1950s, psychologist Paul Meehl showed that simple statistical models often outperform clinicians, yet decades later Berkeley Dietvorst found that people abandon algorithms after seeing them make even minor errors, a pattern he called “algorithm aversion.” If we accept Kasparov’s framing uncritically, we risk hard-coding these biases into future systems, preserving doctors’ authority at the expense of patient outcomes.

Consider Matt Might, a computer scientist and rare disease advocate who trains undergraduates at the University of Alabama at Birmingham to use an AI tool called mediKanren. The system scans vast biomedical datasets, spotting hidden links between symptoms, diseases, and potential treatments.

In six years, he told me, his team has worked with 600 patients, and in nearly half of those cases their suggestions pointed to treatment options that doctors could meaningfully pursue. That success comes from process: running targeted queries, curating the results, and delivering them in a form that physicians can use.

This is Kasparov’s formula in action: a “weak” human paired with a capable machine and, crucially, a strong process. The students aren’t doctors, yet their structured collaboration with AI apparently improves care. If undergraduates can deliver value like this, why assume doctors must sit at the center of every reasoning task? The real question is: Who — or what — gets the best result for the patient?

Part of the problem is that we design health care around the assumption that human judgment — specifically doctors’ judgment — is the gold standard. But patients don’t seek care for a doctor’s opinion as an end in itself; they seek health.

As Oxford AI scholars Richard and Daniel Susskind ask: What is the question to which human judgment is the answer? Framing it this way forces us to design systems around outcomes, not egos. The goal isn’t to preserve a role — it’s to get the best result for the patient, whether that comes from a human, a machine, or a process neither could execute alone.

For nearly a decade, my surveys have found that physicians view AI mainly as a way to shed paperwork and “get back to doctoring,” as if doctoring itself were fixed. But cognitive science shows that human performance hits a ceiling even in well-designed systems. AI raises that ceiling, so the very architecture of the profession must change.

Many leading doctors, even celebrated health informatics visionaries, call medicine an “information processing field,” yet rarely explain what that actually entails.

In “Dr. Bot,” I unpack this claim and argue that such questions lie squarely in the domain of cognitive science and related fields that study how humans and machines think. Yet we are still building health care on analog-era assumptions in a digital century. AI literacy is minimal. Cognitive science barely features.

Moreover, if AI is going to reshape the cognitive division of labor in health care, it’s not just education that needs reform — it’s the entire set of systems and workflows that determine how humans and machines work together. Failing to redesign those is not just shortsighted; it’s structurally unsound.

We need our brightest interdisciplinary minds — philosophers, cognitive scientists, ethicists, data scientists — working alongside clinicians to decide which tasks belong to humans, which to machines, and which to redesigned processes neither could perform alone. Sometimes leadership will shift to colleagues who aren’t doctors at all.

AI in medicine isn’t about bolting automation onto old workflows. It’s about deciding, with moral and cognitive clarity, what the medical profession is for. That may redistribute authority, status, and even the definition of “doctor.” If we can face that, we can design a profession — or, more accurately, professions — that match the realities of 21st-century cognition, technology, and patient needs.

If we can’t, we’ll keep mistaking motion for progress. And patients like Liam will keep finding their answers elsewhere.

Charlotte Blease, Ph.D., is an associate professor in the Department of Women’s and Children’s Health at Uppsala University and a research affiliate in digital psychiatry in the Department of Psychiatry at Beth Israel Deaconess Medical Center. Her book, “Dr. Bot: Why Doctors Can Fail Us and How AI Could Save Lives,” is out now from Yale University Press.