At 3 a.m., everything feels worse. That unfamiliar ache in your side? Not awful, but persistent. You know better than to panic-search symptoms — but your doctor’s office won’t open for hours, and Google catastrophizes more than it clarifies. So you ask a chatbot instead.

What you get isn’t just information. You get a story.

That’s what makes this moment different from the past 20 years of digital health-seeking, when patients would turn to WebMD or “Dr. Google,” often to their doctors’ dismay. Now, instead of searching, patients are using technology to shape explanations. Generative AI tools don’t just summarize; they simulate conversation. They let people organize thoughts, explore outcomes, and rehearse how they’ll describe what they’re feeling. The result isn’t a diagnosis. It’s a draft.

And that draft is already changing what happens in the exam room.

More people are turning to these tools than most clinicians realize. A recent KFF Health Tracking Poll found that 17% of U.S. adults — and a higher percentage of younger adults — have used generative AI tools to ask health-related questions. The behavior is already here. What matters now is understanding how it’s shaping the conversation.


As a clinical psychologist, I’ve spent decades helping people make sense of confusion and anxiety. One of the most powerful tools in therapy is narrative: not just what happened, but how it’s told. People don’t simply recall events — they craft them into meaning. A rehearsed story feels true, even when it’s not. That’s the shift we’re now seeing in health care: uncertainty processed not through facts alone, but through fluent, plausible, practiced explanations.

Years ago, not long after search engines became part of everyday life, my family physician remarked that many of his patients now came in “knowing more about medicine than they ever did.” His job, he said, was no longer to deliver information — it was to help them interpret it. He had excellent social instincts and clinical wisdom, and even then, he could sense that the patient role was shifting.

That shift continues today — but with a deeper twist. Patients aren’t just arriving with facts. They’re arriving with shaped, rehearsed stories.

Symptom searching isn’t new. What’s new is the ability to interact, refine, and rehearse. Patients arrive not with scattered complaints, but with structured narratives — written down, thought through, sometimes emotionally processed. That structure shapes the conversation. It changes what gets shared, how it’s framed, and how open a patient might be to hearing something different.

In a recent JAMA essay, a physician described a patient who came in with dizziness and used strikingly clinical language: “It’s not vertigo, more of a presyncope kind of feeling.” When the doctor asked if she worked in health care, the patient said no — she’d used a chatbot to prepare for the appointment.

“It felt like someone else was in the room,” the doctor wrote.

That line captures where we are. When a patient feels confident in a story — even a wrong one — it becomes harder to revise. That’s the new clinical challenge: not just gathering history, but renegotiating it. If a tool steers someone toward a reassuring but inaccurate explanation, a clinician may have to reopen uncertainty that feels already resolved.

A friend of mine used a chatbot to understand her persistent nausea. It suggested indigestion or stress. Reassured, she delayed seeking care. A week later, her doctor diagnosed gallstones. The tool hadn’t been blatantly wrong — but its tone and narrative coherence diverted her attention and cost her time.

Another colleague told me about a teenager who came in with low mood and fatigue. Before the visit, she’d turned to a chatbot and concluded she had a dietary deficiency. By the time she reached the clinic, she was already taking supplements and resistant to exploring emotional factors. The tool hadn’t dismissed mental health — it had simply offered an explanation she preferred, and that subtly rerouted the encounter.

This is the risk: fluency that feels like accuracy. In structured settings, generative tools can perform impressively. GPT-4, for instance, has scored over 90% on standardized clinical vignettes — outperforming physicians in some studies focused on cardiac symptom triage and complex gastrointestinal cases. But in real-world use by laypeople, accuracy drops sharply. A 2025 preprint found that diagnostic accuracy fell to around 35% when the same tools were used without clinical framing. The tool isn’t failing — the question often is.


But it doesn’t have to be this way. I’ve spoken with friends, family members, and patients about how they use these tools when preparing for visits. I’ve seen how much the output changes when users shift their inputs from vague to focused — from “What’s wrong with me?” to “Here’s what I’m feeling, here’s what I’m worried about, and here’s what I want to ask my doctor.”

The difference is not about getting a better answer. It’s about organizing thought. Prompts like “I’m preparing to see my doctor and want to describe my symptoms clearly” often lead to structured, practical guidance: timelines, symptom tracking, questions to raise. That kind of organization doesn’t replace care — but it can reshape it.

The issue isn’t just what these tools can do. It’s whether people are taught how to use them well. Most patients don’t know what a chatbot’s memory functions do. Many don’t realize that unclear questions lead to unclear — and sometimes misleading — replies. What’s needed is not technical training, but conversational guidance.

Clinicians can help. Instead of pretending patients arrive as blank slates, they should ask:

“Did you look anything up before coming in?”

“Did you use any tools to think it through?”

“What were you hoping the problem might be — or hoping it wasn’t?”

These questions invite the story that’s already formed. They help uncover the mental path a patient has already walked — and create space for co-creation rather than correction.

Health systems and digital platforms can support this shift too. Patient portals could offer pre-visit templates or example prompts. Providers could share safe-use language or recommend framing strategies. Even small nudges — like suggesting users say, “Help me prepare for my visit” instead of “What’s wrong with me?” — can lead to more productive interactions.

We should be teaching people how to build personal health narratives — drafts that are clear but revisable, reflective but not prematurely certain. The goal isn’t to limit autonomy. It’s to preserve flexibility, so the story that gets told can still change when it needs to.

Patients used to arrive with scattered symptoms and search history. Now, many arrive with a narrative already formed. That story can be useful. It can be misleading. Either way, it changes the conversation.

The era of raw symptom search is over. This is the era of narrative rehearsal. And if clinicians don’t start listening for the stories patients have already begun to tell themselves, they’ll lose the chance to shape how those stories end.

Harvey Lieberman, Ph.D., is a clinical psychologist and consultant who has led major mental health programs and now writes on the intersection of care and technology. His recent New York Times guest essay, “I’m a Therapist. ChatGPT Is Eerily Effective,” explored his year-long experiment using AI in therapy.