And now OpenAI is launching a new service, ChatGPT Health, dedicated to medical information. Here you can directly link the AI to your personal medical records, as well as to data stored on smart devices, like the heart-rate and respiration readings from an Apple Watch. OpenAI’s archrival Anthropic has followed suit by adding similar features to premium versions of its Claude chatbot.
But there’s good reason to be wary. The new services vow to protect your data and not use it to train their systems, but the companies aren’t covered by HIPAA, the federal law that protects the privacy of medical data. So you’ll have to take their word for it.
Besides, AIs often make mistakes. They can easily be thrown off by a user’s inexact choice of words. And they can even mislead users by providing them with too little information.
Still, physicians now take it for granted that patients will be armed with AI-generated advice. “It used to be Google searches, WebMD. Now it’s ChatGPT, Perplexity, Claude,” said Bhargav Patel, a child and adolescent psychiatrist at Brown University and former chief medical officer at medical AI firm Sully.AI. “Patients are like, ‘Oh, well, I was talking to ChatGPT about my symptoms and my medications and this is what it said.’”
This is often a good thing, said Patel. But only up to a point. “When you’re a nonexpert in the area, you don’t know when the AI has hallucinated and it’s just making something up.” Unless there’s a human in the loop, getting your medical advice from an AI can be something of a crapshoot.
In an article published last August, doctors at the University of Washington told of a patient who suffered from paranoia and hallucinations after consuming sodium bromide as an alternative to table salt. The 60-year-old man said he had gotten the idea from ChatGPT, which had told him that bromide was a satisfactory alternative.
Ask the same question today, and ChatGPT bluntly warns that “sodium bromide is not safe as a food salt.”
But when AI is backstopped by competent human physicians, the results can be impressive. Consider the case of Joe Gaddy, a financial technology consultant in Attleboro.
Several years ago, when he was 45, Gaddy learned he had an enlarged prostate. His father had died of prostate cancer at 76. Besides, Black men like Gaddy contract the disease at much higher rates than the overall US population.
At the time Gaddy’s urologist told him “there’s some surgical options that are available, but none of them are really all that appealing.” Gaddy agreed, and for years relied on a careful diet and plenty of exercise.
But last year, after getting his annual MRI tests, Gaddy, now 51, decided to feed the results into ChatGPT. Up popped an answer his urologist hadn’t mentioned, a robotic surgical procedure which uses an intense jet of water to scrape away excess prostate tissue.
“Even my doctors didn’t really know about this procedure,” said Gaddy.
He shared the idea with his primary care doctor, who had begun using his own AI system. “He plugged the procedure into his AI . . . it came back and he was like, yeah, this is a great procedure.”
Next Gaddy asked ChatGPT for doctors who perform the procedure, and the chatbot directed him to a surgeon in Worcester.
“We set up a meeting,” said Gaddy. “Just from talking to him, he kind of gave me peace of mind that . . . everything’s going to be OK.”
Gaddy had the surgery in mid-December. He’s on antibiotics for an infection, but otherwise, so far so good.
Gaddy’s story features AI-powered health care at its best, a potent partnership of humans and machines. But some users may not seek out advice from real doctors.
In fact, some can’t — users with little money and no health insurance, or those who live far from hospitals or clinics. Marzyeh Ghassemi, a professor at the Massachusetts Institute of Technology, fears that such households are especially at risk from unreliable medical AIs.
“What I’m really, really worried about is economically disadvantaged communities,” said Ghassemi. “You might not have access to a health care professional who you can quickly call and say, ‘Hey . . . should I listen to this?’”
The user’s education level or cultural background can also cause problems. Last year, a study coauthored by Ghassemi found that imprecise wording of a question can lead a medical AI to generate false answers.
Monica Agrawal, an assistant professor of bioinformatics and computer science at Duke University who earned her doctorate at MIT in 2023, said that because AI models are trained on precise medical jargon, they may be flummoxed by questions asked in inexact or unscientific language.
“No patient I have ever met has ever phrased their question like a two-paragraph medical exam question,” said Agrawal.
She also worries that AIs are too literal, and therefore fail to understand what their users really need to know. This can result in dangerously slanted advice.
In a recent paper coauthored by Agrawal, a team of researchers asked Google and Perplexity AIs about the risks of certain medical procedures.
“You might see a patient search for something like, ‘What are the risks of my surgery?’ ” said Agrawal.
Sure enough, in about 90 percent of cases, the AIs rattled off a list of potential risks for various procedures, but not one word about possible benefits.
If you asked a human doctor about the risks of, say, a double mastectomy, they’d know what you’re really asking: “I may have breast cancer. What should I do?” So along with describing the risks, a human doctor would cheer you up with positive information about various treatment options, including mastectomies, which save thousands of lives every year. But an AI might never mention the upside of getting treated — just the risks, because that’s what you asked for.
“So if you’re a patient who was a little bit anxious already about the surgery, it could make you a lot more anxious,” said Agrawal. In fact, it could cause a nervous patient to refuse a life-saving treatment until it’s too late.
Agrawal’s paper says that until AIs become more trustworthy, people are better off getting their health information from high-quality medical websites run by trusted institutions like the Mayo Clinic. There they can find a full spectrum of advice, written by humans who understand what worried readers really want to know.
And then there’s privacy. How much of our health data can we safely share with an AI?
ChatGPT Health and Claude encourage users to upload their complete medical histories, as well as a steady stream of health data like heart rate and blood oxygen levels, collected by the user’s smart watch.
No thanks, said Ghassemi. She’s fine with asking an AI to decode bewildering medical jargon, but believes that sharing our most sensitive medical data is too great a risk.
“I would say it’s a bad thing,” she said. “Don’t do it.”
Hiawatha Bray can be reached at hiawatha.bray@globe.com. Follow him @GlobeTechLab.