Pennsylvania lawmakers want to make sure humans are still involved in health care decisions that rely on artificial intelligence.
Bipartisan legislation introduced this month in the House of Representatives would require health care providers and insurers to be transparent about how they use artificial intelligence and ensure that humans review all assessments made by AI. Providers and insurers also would be mandated to provide evidence that their uses of AI minimize bias and discrimination prohibited by law.
AI has a range of applications in health care — from chatbots that offer simple care or answer questions about insurance coverage, to algorithms that interpret medical images, to tools that file visit notes into patient records.
But because AI technologies are trained on existing medical records and treatment data, they can perpetuate the biases within them. For instance, an AI program used by several health systems prioritized healthier white patients over sicker Black patients to receive additional care management, Harvard Medical School notes. The program had been trained on cost data rather than on patients' actual care needs.
A Rutgers University study also found that AI algorithms can perpetuate false assumptions because they rely on data that can lead to generalizations about people of color. Algorithms struggle to account for social determinants of health — like access to transportation, the cost of healthy food and work schedules — that may make it harder for patients to follow treatment plans requiring frequent doctor visits, exercise and other measures.
Rep. Tarik Khan, a nurse practitioner who co-sponsored the bill, said the idea isn’t to remove AI from health care, but to put some guardrails in place.
“Something as rich and as dynamic as AI, we have to make sure we’re very deliberate, especially when we’re getting into science, we have to make sure that the computer doesn’t take over,” said Khan, a Democrat from Philadelphia. “We have to make sure that people are weighing in, clinicians are making medical decisions, not the computer.”
But Khan said a particular concern is insurers' use of AI in prior authorization — when patients must receive approval from their insurers before undergoing medical procedures. A report from the American Medical Association noted that, in some cases, AI denied prior authorizations at a rate 16 times higher than typical. A 2024 AMA survey found that 61% of doctors worried that AI use is increasing prior-authorization denials.
For patients, a denial can mean going into medical debt to get the treatment or deciding not to have it, which Khan said can be life-threatening. Another AMA survey found that 93% of doctors said prior-authorization issues have delayed what they considered to be necessary care, and 29% said those delays caused a serious adverse event resulting in hospitalization, permanent injury or death.
“The concern is that insurance companies are having AI do these denials without a human ever reviewing the case and weighing in,” Khan said. “There is a lack of transparency of when it’s happening, how often it’s happening, who’s using it, who’s not using it, and we think that the public has a right to know, especially with something as sensitive as health care, which is very personal for people.”
Khan said AI can be useful in health care, particularly in analyzing data that allows providers to draw medical conclusions. But he said AI needs human review, and patients need to be aware that it is being used — even if it's only used by insurers to craft letters to patients. To Khan, it's important that final decisions are made by someone with medical training, which AI cannot offer.
Khan said the bill's regulations will affect current uses of AI, but he also wants them in place to protect patients as the technology continues to evolve.
Pennsylvania is one of many states considering legislation that regulates AI. States including Arizona, Maryland and Texas have blocked AI from being the sole decision-maker in prior authorizations. Other states have barred AI from presenting itself as a health care provider or added guidance for AI chatbots in mental health treatment.
“The technology is evolving so rapidly that we have to make sure that we’re thinking of or being on top of scenarios that are changing,” Khan said. “We have to make sure that there are appropriate guardrails.”
The Pennsylvania bill was introduced by a bipartisan group of state representatives, including Joe Hogan (R-Bucks County) and Greg Scott (D-Montgomery County). The legislation has been referred to the House Communications and Technology Committee, where it will get further review.