Illinois Gov. JB Pritzker (Photo by Gage Skidmore)
Earlier this month, Illinois Gov. JB Pritzker signed legislation banning the use of artificial intelligence (AI) for mental health or therapeutic decision-making without oversight by licensed clinicians. The action makes Illinois one of many states cracking down on the rampant use of AI in behavioral health care and patient communications. More than 250 bills targeting AI in health care have now been proposed in state legislatures across the country, WESA radio reported.
Journalists can find interesting stories by tracking state-level developments, interviewing local legislators, health care providers and other experts for their perspectives, or examining the national landscape. The National Conference of State Legislatures maintains a website that tracks AI legislation across the country in areas such as health insurance, health communications and education.
More on the legislation
Illinois’ Wellness and Oversight for Psychological Resources Act prohibits people, corporations and other entities from providing, advertising or offering psychotherapy services to the public unless the services are conducted by licensed professionals. Violations of the act carry civil penalties of up to $10,000. However, the law does permit the use of AI for administrative and supplementary support for behavioral health professionals, Mobihealth News reported. The act passed unanimously in each chamber of the Illinois General Assembly before Pritzker signed it.
The intent is to protect patients from unregulated AI products and to preserve jobs for qualified behavioral health providers. The law also responds to growing concerns about children’s use of AI chatbots for mental health support, according to Healthcare Finance.
“The people of Illinois deserve quality health care from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” said Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, in a prepared statement. “This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else.”
The bans may stem from concerns about the potential dangers of therapy delivered by AI chatbots, according to the Washington Post. “Already, cases have emerged of chatbots engaging in harmful conversations with vulnerable people — and of users revealing personal information to chatbots without realizing their conversations were not private,” the article said.
Additional states take or consider action
Several other state legislatures have enacted or are considering their own laws related to AI and behavioral health care. Here are a few examples:
In June, the Nevada state legislature passed a law similar to Illinois’, prohibiting AI providers or people from claiming that their systems are capable of providing professional mental or behavioral health care, or from offering AI systems as a substitute for such care. Violations are subject to civil penalties of $15,000.
Utah lawmakers tightened regulations for the use of AI in mental health care in May, the Washington Post reported. Under these regulations, mental health chatbot suppliers may not sell or share users’ health information with third parties; may not use the chatbot to advertise products or services unless the advertisement is clearly identified as such; and must make clear to users that the chatbot is an AI technology and not a human, among other stipulations.
In Pennsylvania, a proposed bill would require parents to give consent for their children to receive virtual mental health services provided by a school entity (including behavioral support via AI), according to the Washington Post.
More state bills regulate the use of AI in other areas of health care
Additional state legislative efforts target other areas of AI in health care, including patient communications. Here are a few examples:
In Colorado, an AI statute that takes effect in February 2026 says that “high-risk” AI systems (used to make “consequential decisions” in health care, education, insurance and other areas) must be governed by formal risk management frameworks, according to HR Dive.
Utah Senate Bill 226 tightened rules on how health care entities, including clinical labs, can use generative AI in patient interactions, Dark Daily reported. As of May 7 this year, labs must disclose AI use when a patient asks whether they’re interacting with AI, or when the lab uses AI in “high-risk” communications such as delivering test interpretations, diagnostic results or clinical advice.
California Assembly Bill 3030, which went into effect Jan. 1 this year, mandates transparency when generative AI is used in health care, Dark Daily reported. Any health facility, laboratory, clinic, physician’s office or group practice that employs generative AI to create patient communications about clinical information must include a disclaimer stating the content was generated by AI, and provide clear instructions telling patients how they can speak directly with a human clinician.
Texas passed a law that, as of Sept. 1, 2025, regulates how AI is used within electronic health records, according to Dark Daily. Providers that use AI for recommendations on diagnosis or treatment based on a patient’s medical record must review all information obtained through AI to ensure accuracy before entering it into the patient’s electronic health record, the newsletter said.
In Pennsylvania, five state representatives plan to introduce legislation to regulate the use of AI in health care, Becker’s Health IT reported. The proposed bill, led by Rep. Arvind Venkat, a physician, would require insurers, hospitals and clinicians to disclose how AI is used in their operations. It also would require providers of AI services to attest to the Pennsylvania Department of Health or Department of Insurance that bias and discrimination have been minimized, and to provide evidence to that effect.
Resources