After OpenAI and Anthropic launched dedicated health care initiatives in January, a study published in February found that OpenAI’s ChatGPT Health had a 50% error rate in emergency test cases, incorrectly recommending that care be delayed.

That error rate, which was not identified before the app was rolled out, is a symptom of a broader problem: health care systems and insurers are rapidly adopting AI, often skipping the essential testing needed to determine how well these systems work and how safe they are for patients. This push to expand AI in health care is intensifying an existing trust crisis.

The decline of trust in health care in the U.S. has been ongoing and was worsened by institutional responses to the Covid-19 pandemic. A national survey of more than 443,000 U.S. adults found trust in physicians and hospitals fell more than 30 percentage points between 2020 and 2024, from 72% to 40%, with declines across multiple sociodemographic groups. For Black, Latine, and Indigenous communities, this collapse layers onto preexisting medical mistrust rooted in a long and ongoing history of medical racism in the U.S. health care system. Research shows that patients who distrust their health care providers are more likely to delay care, including preventive screenings, and to discontinue their medications, and that those patterns are associated with higher rates of hospitalization and premature death.

AI’s documented harms compound this mistrust. For example, a widely cited algorithm affecting an estimated 200 million Americans systematically underestimated how sick Black patients were because it used medical expenses as a proxy for illness. Patients were unaware that this tool was being used to determine their level of care. Medicare Advantage insurers used AI tools that helped to double their denial rate for elderly patients; about 75% of the denials were overturned on appeal, but fewer than 1% of patients ever appealed. The federal government has since launched a pilot that brings AI-enabled prior authorization to traditional Medicare in six states.
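
To see how a cost-based proxy produces that underestimate, consider a minimal, hypothetical simulation in Python (the two groups, the access_factor parameter, and all numbers are illustrative assumptions, not details of the actual commercial algorithm): when recorded spending stands in for illness, patients who face barriers to care must be sicker to receive the same risk score.

```python
import random

random.seed(0)

# Hypothetical toy simulation of proxy-label bias -- NOT the actual
# commercial algorithm. When past spending stands in for illness, a
# group with less access to care looks healthier at every score.

def simulate_patient(access_factor):
    """Return (true_illness, observed_cost) for one synthetic patient."""
    true_illness = random.uniform(0, 1)           # latent severity
    observed_cost = true_illness * access_factor  # spending tracks access, not need
    return true_illness, observed_cost

# Two synthetic groups with identical underlying illness but unequal access.
group_a = [simulate_patient(access_factor=1.0) for _ in range(10_000)]
group_b = [simulate_patient(access_factor=0.7) for _ in range(10_000)]

# A cost-trained "risk score" is effectively predicted spending, so at any
# fixed cutoff, patients from the lower-access group must be sicker to
# qualify for extra care.
CUTOFF = 0.5
flagged_a = [illness for illness, cost in group_a if cost >= CUTOFF]
flagged_b = [illness for illness, cost in group_b if cost >= CUTOFF]

print(f"Mean true illness of flagged patients, full-access group:  {sum(flagged_a)/len(flagged_a):.2f}")
print(f"Mean true illness of flagged patients, lower-access group: {sum(flagged_b)/len(flagged_b):.2f}")
```

In this sketch, patients flagged at the same score cutoff are, on average, sicker in the lower-access group, the same pattern researchers documented in the real algorithm.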

Health care, which accounted for $5.3 trillion, or 18% of GDP, in 2024, is being heavily pursued by the AI industry. U.S. health organizations spent $1.4 billion on AI tools in 2025, nearly three times what they spent the previous year, for a range of functions, including analyzing medical images and automating billing and documentation. In addition to potential profits, the sector provides what AI companies need to operate and, in many cases, to build and improve their systems: data, and a lot of it. This includes the electronic health records, insurance claims, diagnostic images, and genetic profiles of hundreds of millions of Americans, often collected without meaningful transparency about how the data will be used and with no input from patients and communities.

The data show that AI’s rapid adoption in health care is worsening the mistrust Americans already have in the health care system. A February 2025 study that surveyed more than 2,000 Americans found that 66% reported low trust in their health care system to use AI responsibly, and 58% reported low trust that their health care system would ensure an AI tool did not harm them.

Neither knowledge about AI nor health literacy changed these findings. The most important predictor was how much someone already trusted the health care system.

In a nationally representative survey, most patients said they wanted to know when AI was used in their diagnosis and treatment, yet there is no federal law requiring disclosure, and only a handful of states currently have laws addressing it. When patients are not informed about what is happening to them or their data, and no one is required to tell them, the harm falls on all patients, but particularly on communities with the least trust to lose.

Patients who have experienced discrimination in health care are significantly less likely to trust health systems to use AI responsibly. Rolling out AI systems without meaningfully involving patients and communities in the decision-making only repeats the pattern that led to the mistrust in the first place.


What needs to change is who contributes to decisions about how AI tools are purchased, governed, and used. Patients and community members need formal decision-making roles, not just advisory positions. Health care systems and insurers need to publicly report how AI tools perform, including across racial and ethnic groups, before those tools are rolled out. Patients need to be told clearly and in advance when AI is being used in their care. These are the basic conditions for a trustworthy system.

Health care systems and companies can make different choices, choices that earn the trust of their patients and the communities they serve. They have the capacity to move fast. The harder work is moving at the speed of trust. That means giving patients and community members a say before these systems are even purchased, not after harm has been done.

Oni Blackstock, M.D., M.H.S., is a physician-researcher, founder and executive director of Health Justice, and a Public Voices fellow on technology in the public interest with the OpEd Project.