ChatGPT users around the world send billions of messages every week asking the chatbot for healthcare advice, OpenAI shared in a new report on Monday. Roughly 200 million of ChatGPT’s more than 800 million regular users submit a healthcare prompt each week, and more than 40 million do so every day.

According to anonymized ChatGPT user data, more than half of users ask ChatGPT to check or explore symptoms, while others use it to decode medical jargon or get more information about treatment options. Nearly 2 million of these weekly messages focus on health insurance, with users asking ChatGPT to help compare plans or handle claims and billing.

The numbers reflect the troubled state of the American healthcare system, particularly as patients struggle to pay exorbitant medical bills.

In its own research, OpenAI found that three in five Americans view the current system as broken, with hospital costs cited as the biggest pain point.

The study found that seven in 10 healthcare-related conversations happen outside normal clinic hours. On top of that, an average of more than 580,000 healthcare inquiries came from “hospital deserts,” places in the United States that are more than a 30-minute drive from a general medical or children’s hospital.

The report also showed increasing AI adoption among healthcare professionals. Citing data from the American Medical Association, OpenAI said that 46% of American nurses reported using AI weekly.

The report comes as OpenAI increases its bet on healthcare AI, despite the accuracy and privacy concerns that come with deploying the technology. The company’s CEO of applications, Fidji Simo, said she is “most excited for the breakthroughs that AI will generate in healthcare” in a press release announcing her new role in July 2025.

OpenAI isn’t alone in its big healthcare bet, either. Tech giants from Google to Palantir have been working on product offerings in the healthcare AI space for years.

Many see healthcare AI as a promising field with the potential to ease the burden on medical workers. But it’s also contentious, because AI is prone to mistakes. While a hallucinated response can be an annoying hurdle in many other use cases, in healthcare it can be a matter of life or death.

These AI-driven risks are not confined to the world of hypotheticals. According to a report from August 2025, a 60-year-old with no past psychiatric or medical history was hospitalized with bromide poisoning after following ChatGPT’s advice and taking bromide. As the tech stands today, no one should use a chatbot to self-diagnose or treat a medical condition, full stop.

As investment in the technology builds, so do policy conversations. There is no comprehensive federal framework for AI, much less healthcare AI, but the Trump administration has made it clear that it intends to change that.

In July, OpenAI CEO Sam Altman was one of many tech executives in attendance at the White House’s “Make Health Tech Great Again” event, where Trump announced a private sector initiative to use AI assistants for patient care and share the medical records of Americans across apps and programs from 60 companies.

The FDA is also looking to revamp how it regulates AI deployment in healthcare. The agency published a request for public comment in September 2025, seeking feedback from the medical sector on the deployment and evaluation of health AI.

OpenAI’s latest report seems to be its own attempt at putting a comment on the public record. The company pairs its findings with sample policy concepts, such as asking for full access to the world’s medical data and a clearer regulatory pathway for bringing AI-infused medical devices to consumers.

“We urge FDA to move forward and work with industry towards a clear and workable regulatory policy that will facilitate innovation of safe and effective AI medical devices,” the company said in the report.

In the coming months, OpenAI is preparing to release a full policy blueprint for how it wants healthcare AI to be regulated, the company added.