Asking a general-use chatbot for health help used to seem like a shot in the dark—just two years ago, a study found that ChatGPT could diagnose only 2 in 10 pediatric cases correctly. Among Google Gemini’s early recommendations were eating one small rock a day and using glue to help cheese stick to pizza. Last year, a nutritionist ended up hospitalized after taking ChatGPT’s advice to replace salt in his diet with sodium bromide.
Now A.I. companies have begun releasing health-specific chatbots for both consumers and health care professionals. This month, OpenAI announced ChatGPT Health, which allows regular people to connect their medical records and health data to A.I. for (theoretically) more accurate responses to their health queries. It also released ChatGPT for Healthcare, a service that is already in use by hospitals across the country. OpenAI isn’t the only one—Anthropic announced its own chatbot, Claude for Healthcare, designed to help doctors with day-to-day tasks like retrieving medical records and to help patients better communicate with their providers.
So how could these chatbots be an improvement over regular old chatbots? “When talking about something designed specifically for health care, it should be trained on health care data,” says Torrey Creed, an associate professor of psychiatry researching A.I. at the University of Pennsylvania. This means that a chatbot shouldn’t have the option to pull from unreliable sources like social media. The second difference, she says, is ensuring that users’ private data isn’t sold or used to train models. Chatbots created for the health care sector are required to be HIPAA compliant. Consumer-facing bots that invite people to chat directly about their symptoms, by contrast, are designed only to connect the dots, and protecting consumer data there comes down to having robust privacy settings.
I spoke to Raina Merchant, the executive director of the Center for Health Care Transformation and Innovation at UPenn, about what patients need to know as they navigate the changing A.I. medical landscape, and how doctors are already applying the tech. Merchant says A.I. has a lot of potential—but that, for now, it should be used with caution.
How is the health care system currently using these chatbots and A.I.?
It’s a really exciting area. At Penn, we have a program called Chart Hero, which can be thought of as ChatGPT embedded in a patient’s health record. It’s an A.I. agent I can prompt with specific questions to help find information in a chart or make calculations for risk scores or guidance. Since it’s all embedded, I don’t have to go look at separate sources.
Using it, I can spend more time really talking to patients and have more of that human connection—because I’m spending less time doing chart digging or synthesizing information from different areas. It’s been a real game changer.
There’s a lot of work in the ambient space, where A.I. can listen after patients have consented and help generate notes. Then there’s also a lot of work in messaging interfaces. We have a portal where patients can send questions at any time, and A.I. helps identify ways, still with a human in the loop, to answer those questions accurately.
What does having a human in the loop look like?
Many hospital chatbots are intentionally supervised by humans. What might feel automated is often supported by people behind the scenes. Having a human in the loop ensures there are some checks and balances.
So a completely consumer-facing product like ChatGPT Health wouldn’t have a human in the loop. You can just sit on the couch by yourself and have A.I. answer your health questions. What would you recommend that patients use ChatGPT Health for? What are the limitations?
I think of A.I. chatbots as tools. They are not clinicians. Their goal is to make care easier to access and navigate. They are good at guidance, but not so much judgment. They can help you understand next steps, but I wouldn’t use them for making medical decisions.
I really like the idea of using it to think through questions to ask your doctor. Going into a medical appointment, people can have a lot of emotions. Feeling like you’re going in more prepared, that you’ve thought of all the questions, can be good.
Let’s say I have a low-grade fever. Is it a good idea to ask ChatGPT Health what to do?
If you are at the point of making a decision, that’s when I would engage a physician. I see real value in using the chatbot as a tool for understanding next steps but not for making a decision.
So how reliable are these new health chatbots at diagnosing conditions?
They have a tremendous amount of information that can be informative for both patients and clinicians. What we don’t know yet is when they hallucinate, or when they veer from guidelines or recommendations.
It won’t be clear when the bot is making something up.
There are a couple of things that I tell patients: Check for consistency, go to trusted sources to validate information, and trust your instincts. If something sounds too good to be true, have a certain amount of hesitancy about making any decisions based on the bot’s information.

What sources should patients be using to verify A.I.?
I rely on the big recognizable names, like information from the American Heart Association or other large medical associations that might have guidelines or recommendations. When it gets to the question “Should I trust the chatbot?,” that’s probably when it’s valuable to work with your health care professional.
Is the data that patients put into health chatbots secure?
My recommendation for any patient would be to not share personal details, like your name, address, medical record number, or prescription IDs, because it’s not the environment we use for protecting patient information—in the same way that I wouldn’t enter my Social Security number into a random website or Google interface.
Does this include health care chatbots provided through hospitals or health centers?
If a hospital is providing a chatbot and [is very clear and transparent] about how the information is being used, and health information is protected, then I would feel comfortable entering my information there. But for something that didn’t have transparency around who owns the data, how it’s used, etc., I would not share my personal details.
