Hallucinations are a frequent point of concern in conversations about AI in healthcare. But what do they actually look like in practice? This was the topic of a panel discussion held last week at the MedCity INVEST Digital Health Conference in Dallas.

According to Soumi Saha, senior vice president of government affairs at Premier Inc. and moderator of the session, AI hallucinations occur when AI “uses its imagination,” which can harm patients when the technology provides wrong information.

One of the panelists — Jennifer Goldsack, founder and CEO of the Digital Medicine Society — described AI hallucinations as the “tech equivalent of bullshit.” Randi Seigel, partner at Manatt, Phelps & Phillips, defined them as instances when AI makes something up, “but it sounds like it’s a fact, so you don’t want to question it.” Lastly, Gigi Yuen, chief data and AI officer of Cohere Health, said hallucinations occur when AI is “not grounded” and “not humble.”

But are hallucinations always bad? Saha posed this question to the panelists, asking whether a hallucination can help people “identify a potential gap in the data or a gap in the research” that signals the need for further work.

Yuen said that hallucinations are bad when the user doesn’t know that the AI is hallucinating.

However, “I will be completely happy to have a brainstorming conversation with my AI chatbot, if it’s willing to share with me how comfortable they are with what they say,” she noted.

Goldsack compared AI hallucinations to missing data in clinical trials, arguing that the absence of data can itself tell researchers something. In mental health trials, for example, missing data can be a signal that a participant is doing really well because they’re “living their life” instead of recording their symptoms every day. Yet the healthcare industry often responds to missing data with blaming language, citing a lack of patient adherence instead of reflecting on what the missing data actually means.

She added that the healthcare industry tends to put a lot of “value judgments onto technology,” even though technology “doesn’t have a sense of values.” So when the healthcare industry encounters AI hallucinations, it’s up to humans to be curious about why the hallucination occurred and to apply critical thinking.

“If we can’t make these tools work for us, it’s unclear to me how we actually have a sustainable healthcare system in the future,” Goldsack said. “So I think we have a responsibility to be curious and to be sort of on the lookout for these sorts of things, and thinking about how we actually compare and contrast with other legal frameworks, at least as a jumping off point.”

Seigel of Manatt, Phelps & Phillips, meanwhile, stressed the importance of building AI into the curriculum for medical and nursing students, including how to understand the technology and how to question its output.

“It certainly isn’t going to be sufficient to click through a course in your annual training that you’re spending three hours doing already to tell you how to train on AI. … I think it has to be iterative, and not just something that’s taught one time and then part of some refresher course that you click through during all the other annual trainings,” she said.