AI Front Door

AI is reshaping how patients access health information and healthcare.


It’s 1961. A middle-aged housewife named Mary, worn down by months of unrelenting fatigue, finally visits her family physician. He interviews and curtly examines her, then orders blood tests. When the results reveal severe anemia, he has his nurse call with instructions to see a gynecologist, deciding not to trouble Mary with the details. More tests follow. A week later, the family physician calls to say she needs a hysterectomy, never mentioning the word “cancer,” a standard practice at the time. With her doctor holding all the medical knowledge, she could only wait for whatever he chose to share.

American medicine had long rested on this implicit “grand bargain.” Physicians monopolized knowledge, promising to use it for the public good. In return, society granted them the privilege of regulating themselves.

Yet there was always an undercurrent of resistance. In the 19th century, families relied on guides like Buchan’s Domestic Medicine, while the Popular Health Movement pressed people to care for themselves and question physicians’ authority.

In the 20th century, mass media in the form of radio shows like The Voice of Medicine, magazines like Prevention, figures like Dr. Spock, and books like Our Bodies, Ourselves gave laypeople license to reclaim intimate knowledge from the profession.

I entered medicine in the early 2000s, just as the information playing field was shifting dramatically. Patients began arriving for appointments with printouts from Google searches, prompting eye rolls from some of my senior attendings. Soon, medical websites, online patient communities, and wearable technologies fueled a burgeoning ePatient movement, with some patients asserting more control over their health and care decisions.

Two decades later, we’re experiencing another shift. As medicine struggles to uphold its side of the “grand bargain,” artificial intelligence is emerging as an information stream like no other. AI extends the long trend of democratizing medical knowledge, but this time feels less like an evolution and more like a leap. Let me explain why.

Chat Is Replacing Search

Over the past year, traffic to traditional health websites has dropped by one-third, as consumers increasingly turn to products like ChatGPT, Gemini, Claude, and Grok. Nearly 6% of ChatGPT’s 2.5 billion daily messages—more than 150 million every day—now touch on health, fitness, beauty, or self-care.

The advantages over search are clear. Instead of sifting through a sea of blue links and webpages, users can ask direct questions and receive immediate, tailored responses.

These tools are also highly engaging, moving beyond text to voice-based conversations. If you haven’t tried voice mode, you should. The technology has crossed the “uncanny valley”—it feels like magic. Some people even form deep emotional bonds with their bots.

AI-based chat tools do more than help patients find answers. As “ePatient Dave” deBronkart told me, AI provides the clinical reasoning that helps people make sense of raw information from websites, wearables, and diagnostic tests. And by simulating conversation, these tools help patients organize their thoughts, explore options, interpret results, and prepare clearer narratives for their clinicians.

But the limitations are real. While we hear amazing stories of ChatGPT uncovering missed diagnoses, we rarely hear about the times it misleads or provides false reassurance. AI can provide world-class advice, yet commit errors a third-year medical student would not.

Unreliable outputs stem partly from the training data (often the entire internet) and partly from the probabilistic nature of the underlying models. And because these products are designed for engagement, they can be overly agreeable and even sycophantic. Yet despite the lack of proven safeguards, AI companies have removed many disclaimers and stopped reminding users that their bots are not doctors.

Additionally, these general-purpose consumer products often lack context about the user, particularly their medical history. But, as Brendan Keeler (aka Health API Guy) explained to me, initiatives like the CMS Health Tech Ecosystem Initiative and TEFCA Individual Access Services will make it much easier for these apps to incorporate EHR data.

Finally, while information can empower, it isn’t always enough for health. For example, a college student wakes up with pink eye, enters her symptoms into an AI tool, and gets a likely answer—viral conjunctivitis—along with simple guidance: hand hygiene, cold compresses, watchful waiting. She’s reassured and moves on.

But many cases aren’t that straightforward. Another student has had bloody diarrhea for weeks. AI offers him a range of possibilities—infection, IBD, hemorrhoids, and even cancer. His grandfather had colon cancer. His cousin has celiac disease. He’s unsure. Now what?

The point is that people don’t just want information—they want solutions. And for that, clinicians remain essential. Health systems and clinical practices are also starting to leverage patient-facing AI tools.

Healthcare Organizations Are Embracing Consumer-Facing AI

Over the past two decades, health systems and medical practices have built “digital front doors” (websites and portals) where patients can find information, schedule visits, request refills, review results, and message their care teams.

These portals are widely used, but often “impersonal and reactive, built on population-level assumptions,” explained Define Venture’s Lynne Chou O’Keefe. Too often, patients are left to message their doctors and wait—sometimes indefinitely—for a response. Meanwhile, an ever-rising tide of portal messages overwhelms clinical teams.

With AI, “every individual can have their own ‘door,’ shaped dynamically by their risk factors, medical history, behavioral signals, and engagement patterns.” This level of personalization and immediacy was simply not possible before.

EHR vendors are racing to add AI to their portals. Both Epic and Oracle recently announced AI features to help patients schedule appointments, manage bills, search medical records, assess symptoms, and navigate care. Compared to general-purpose products like ChatGPT, these tools are integrated with the EHR, making them contextually aware of each patient’s health history and able to connect them directly with care.

Another advantage is increased trust and privacy. Consumers are far more willing to share their health data with their provider than with a tech company. Epic VP Sean Bina told me that its AI agent “Emmie is built with privacy at the core. It is designed to provide useful assistance bounded by clear safety guardrails, including the ability to escalate to care teams.”

The challenge is calibrating those guardrails. If the AI is too constrained, patients may simply message their clinical team or turn to general-purpose tools like ChatGPT. However, leaving the guardrails too loose exposes health systems to potential clinical errors and legal risks.

Startups are layering AI into EHR portals as well. For example, Hyro uses AI agents to streamline routine administrative tasks, such as scheduling appointments, answering administrative questions, managing bills, and refilling medications.

The symptom-checking app Ada uses probabilistic AI to suggest possible diagnoses and direct patients to appropriate care—for example, flagging symptoms consistent with diabetic ketoacidosis and recommending that patients be evaluated immediately in a local emergency department. Unlike LLMs, Ada’s architecture is deterministic, explainable, and auditable.

K-Health takes the model a step further. Patients begin with an AI chat that, using EHR context, suggests potential diagnoses, performs triage, and recommends next steps. They can then immediately transition into a video visit, where a health system clinician reviews the AI output, asks additional questions, and, using the EHR, writes a note and orders tests, prescriptions, and more, all within the partnering health system. As Chief Product Officer Ran Shaul notes, “K-Health is fully integrated into the system of care.”

Still, critics contend that these applications primarily serve health systems and clinicians rather than their patients. Additionally, these apps remain bound by many of the constraints of traditional delivery models. The upstart practices we’ll discuss next operate with more freedom.

AI-Native Providers Are Creating New Patient Experiences

A wave of startups is building AI-first care models from the ground up, offering far more than general-purpose tools like ChatGPT while sidestepping the friction of traditional healthcare practices.

Counsel Health, for example, begins with an AI medical assistant that gathers history and provides personalized guidance. Patients can escalate at any point to a text chat with a Counsel clinician, who may order tests, prescribe medications, or make referrals. Counsel currently focuses on urgent care and medical advice, but plans to broaden its scope in the months ahead.

Curai Health takes a similar AI-first approach to primary care. Patients start by chatting with an AI assistant, which summarizes the conversation before handing it to a primary care clinician. The clinician then connects with the patient asynchronously or via live video. Curai is designed to make primary care more accessible and scalable.

More than 2 million people have already used Doctronic—which positions itself as an “AI doctor”—to query symptoms, interpret test results, and learn about diagnoses or prescriptions. Users receive a free AI-generated summary with recommendations and can opt for a $39 video visit with a clinician for further discussion, prescriptions, and more. The company also runs an AI-native EHR that captures information from chats and other EHRs to inform future interactions. (Disclosure: I am an advisor.)

As these models rapidly evolve, their core challenge is overcoming the constraints of virtual care—limited ability to examine patients, coordinate tests, or manage referrals remotely across countless local markets. This is particularly difficult for individuals with complex illnesses, who account for the bulk of healthcare spending.

Old Tensions, New Questions

New consumer-facing AI products put old tensions in sharper relief. One of the oldest is also the sharpest: who owns medical knowledge? For centuries physicians held it. Now AI is leveling the playing field. But as physicians loosen their hold on knowledge, will tech companies become the new gatekeepers?

Other questions are just as pressing. How do we balance speed with reliability, especially when the cost of errors isn’t always clear up front? Can tools designed for engagement practice the restraint that defines good clinical care? And how does removing one bottleneck, access to knowledge, magnify others that are harder to solve, like coordinating care, managing risk, and aligning incentives?

Industry veteran Tom Lawry captured the moment well: “AI can deliver value when done right. But no one gets it right at the start. The journey is about learning, adapting, and discovering new ways to make it work.”

For patients, AI promises instantly available, tailored expertise to guide decisions and navigate care. The potential is real, but so are the limitations. As ePatient Dave reminds us, “Using AI is a skill to be learned.” Developing these skills and the wisdom to use these tools effectively will take time.

For clinicians like me, the instinct may be to recoil when patients arrive armed not just with information, but with full narratives, often carefully reasoned with a bot. Yet how can we blame them? In their position, we would do the same. By loosening our grip on knowledge as the core marker of our worth, we can refine the skills that now matter more—asking better questions, curating information, and exercising sound judgment—while embracing our evolving roles as guides, interpreters, and trusted partners.

For healthcare organizations, AI offers opportunities to engage patients and relieve overburdened teams. But the lesson of past technologies is clear: progress won’t come from deploying technology alone; it will come from redesigning systems of care. At the same time, AI may redraw the ecosystem’s control points, reshaping competition and determining who receives care—and where.

Healthcare is, at its core, built on trust—a blend of reliability, responsibility, security, and accountability. Today that trust remains local: people place far more confidence in their doctors and friends than in governments or corporations. But how do we preserve trust in a world where convincing answers—right or wrong—come instantly? And how do we safeguard the value of human expertise while widening access to knowledge? The future of medicine hinges on how we answer those questions.

Acknowledgements: I thank the following people for discussing this topic with me: Adel Baluch (Ada Health), Sean Bina (Epic), Aaron Bours (Hyro), Lynne Chou O’Keefe (Define Ventures), ePatient Dave deBronkart, Ali Diab (Collective Health), Rishi Khakhkar (Counsel Health), Davis Liu (Curai Health), Brendan Keeler (HTD Health), Tom Lawry (Second Century Technology), Claire Novorol (Ada Health), Adam Oskowitz (Doctronic), Matt Pavel (Doctronic), Andrew Rebhan (SG2), Jane Sarasohn-Kahn (THINK-Health), Ran Shaul (K-Health), and Michael Turken (MyDoctorFriend).