
The International Medical Device Regulators Forum (IMDRF) has released a draft technical document to create a harmonized set of best practices for mitigating risks associated with the use of artificial intelligence (AI)-enabled medical devices throughout their product life cycle.

The guidance states that “the GMLP principles describe foundational best practices for the development of AI-enabled medical devices, emphasizing areas such as data quality, model transparency, performance evaluation, and the role of multidisciplinary expertise. These principles underpin each step of the AI life cycle, and this document provides relevant GMLP references to help provide a foundational understanding of applicable principles.”

The document builds on previous guidance regarding Good Machine Learning Practice (GMLP) issued in 2025. (RELATED: IMDRF finalizes good machine learning practices, software risk documents, Regulatory Focus 29 January 2025)

It outlines several “universal concepts” applicable to the AI life cycle, including the implementation of a Quality Management System (QMS), risk management strategies, human oversight, and cybersecurity.

Last month, at its forum in Singapore, IMDRF announced that its management committee had agreed to publish the draft document.

In terms of QMS, the guidance states that “AI-enabled medical devices benefit from implementation of scalable life cycle support processes that emphasize safety-focused risk management throughout all life cycle steps. For example, QMS requirements management captures functional specifications as well as clinical environment considerations, such as how AI outputs will be interpreted by healthcare providers or patients.”

The guidance states that the risks related to AI-enabled medical devices include the “black box” nature of some AI models, which makes it challenging to understand how and why certain outputs are produced or why certain decisions are made by the model.

The guidance suggests that manufacturers follow these standards to reduce risks:

the American National Standards Institute (ANSI)/Association for the Advancement of Medical Instrumentation (AAMI)/International Organization for Standardization (ISO) 14971:2019 standard on the application of risk management to medical devices;
the ISO/Technical Report (TR) 24971:2020, Medical devices—Guidance on the application of ISO 14971; and
the AAMI Technical Information Report (TIR) 34971:2023 on the application of ISO 14971 to machine learning in AI.

The guidance emphasizes that human oversight—particularly from clinicians, healthcare providers, patients, and lay users—is crucial throughout the entire life cycle of AI-enabled medical devices. This oversight ensures that human and clinical expertise informs model development, validates real-world performance, and supports effective human-AI collaboration, all while prioritizing patient safety and enhancing clinical decision-making.

The document includes three appendices: one detailing traceability between GMLP and Life Cycle Steps, another providing examples of common evaluation metrics, and a third outlining labeling elements.

The guidance states that “by adopting the considerations and principles outlined in this document, stakeholders can contribute to the development of AI-enabled medical devices that are trustworthy, transparent, and aligned with the needs of patients, healthcare providers, and requirements of regulatory authorities. This collaborative approach will help ensure that AI-enabled medical devices continue to advance healthcare while maintaining the highest standards of safety and effectiveness.”

The guidance is open for consultation until 10 June.

IMDRF consultation