AMA News Wire
How to develop AI policies that work for your organization’s needs
Aug 18, 2025
Physicians are excited about augmented intelligence (AI)—commonly called artificial intelligence—and its uses are expanding daily.
With the ever-growing number of applications for AI in the health care setting and the excitement around adopting these technologies, it is important for health care organizations to have policies in place to manage the rapidly changing landscape.
“Technology is moving very, very quickly. It’s moving much faster than we’re able to actually implement these tools, so setting up an appropriate governance structure now is more important than it’s ever been because we’ve never seen such quick rates of adoption,” said Margaret Lozovatsky, MD, who is chief medical information officer and vice president of digital health innovations at the AMA.
Nearly 70% of physicians the AMA surveyed said they used AI tools in 2024, up from 38% just a year earlier. Enthusiasm for the technology, which can help with administrative and clinical responsibilities, was up too. The survey showed that 35% of physicians queried reported that their enthusiasm for health AI exceeded their concerns, up from 30% of physicians who felt that way just a year earlier.
The AMA STEPS Forward® “Governance for Augmented Intelligence” toolkit, developed in collaboration with Manatt Health, is a comprehensive eight-step guide for health care systems to establish a governance framework to implement, manage and scale AI solutions. It includes a step dedicated to helping organizations develop AI policies.
The foundational pillars of responsible AI adoption are:
Establishing executive accountability and structure.
Forming a working group to detail priorities, processes and policies.
Assessing current policies.
Developing AI policies.
Defining project intake, vendor evaluation and assessment processes.
Updating standard planning and implementation processes.
Establishing an oversight and monitoring process.
Supporting AI organizational readiness.
From AI implementation to EHR adoption and usability, the AMA is fighting to make technology work for physicians, ensuring that it is an asset to doctors—not a burden.
The AMA defines AI as augmented intelligence to emphasize that AI’s role is to help health care professionals, not replace them.
What to include in an AI policy
The AMA toolkit includes a model AI policy document that health care organizations can download and modify to align with their existing governance structure, roles, responsibilities and processes. Health systems should also be aware of state and federal laws that could impact AI’s use.
At minimum, an organization’s AI policy should articulate:
Definitions for relevant terms such as generative AI and machine learning.
AI risks, including the risks associated with a lack of transparency, patient safety, and data privacy and security.
Permitted uses of approved and publicly available AI tools. This includes describing permitted-use cases, such as using AI to develop drafts of marketing materials and research summaries.
Prohibited AI uses, such as entering patients’ personal health information into publicly available AI tools.
Permitted uses of approved AI tools, such as requirements and guidelines that all team members should follow when using AI tools.
Governance, evaluation and approval processes for AI tools.
Description of AI accountability and oversight, including risk assessment and regulatory compliance.
Policy for how long AI-generated information and patient visit recordings will be retained.
Transparency, including guidelines on when and how clinicians and patients should be made aware that AI is being used.
Training, for example, incorporating AI training into the annual and ad hoc training program for everyone in the organization who uses AI. Training should cover relevant AI policies, such as what AI use is permitted and what is not. The AMA Ed Hub™ routinely publishes educational content that may be helpful for organizations.
Also evaluate existing policy
After an AI policy is developed, it is important to review existing policies and procedures and make any necessary revisions or cross-references to the new policy. Some examples of policies that organizations should consider reviewing are:
Antidiscrimination.
Code of conduct and code of ethics.
Contracting and signatory authority.
Data and security.
Data use.
Informed consent.
Inventory management.
Patient safety reporting.
Training.
Vendor contracting.
Find out how participants in the AMA Health System Member Program are using AI to make meaningful change. You can also watch a recent AMA webinar exploring how to establish an AI governance framework.
In addition to fighting on the legislative front to help ensure that technology is an asset to physicians and not a burden, the AMA has developed advocacy principles (PDF) that address the development, deployment and use of health care AI, with particular emphasis on:
Health care AI oversight.
When and what to disclose to advance AI transparency.
Generative AI policies and governance.
Physician liability for use of AI-enabled technologies.
AI data privacy and cybersecurity.
Payer use of AI and automated decision-making systems.
Learn more with the AMA about the emerging landscape of health care AI. Also, explore how to apply AI to transform health care with the “AMA ChangeMedEd® Artificial Intelligence in Health Care Series.”