Regulatory Challenges for AI in Medical Devices: Ensuring Safety and Efficacy while Fostering Innovation

AI-enabled medical devices have the potential to enhance diagnosis, treatment, and patient care. However, the rapid advancement of AI in medical devices and healthcare raises important regulatory questions, with manufacturers and regulators striving to balance innovation against patient welfare. Ensuring the safety, effectiveness, and ethical use of AI in medical devices requires robust regulation. In this article, we explore the current regulatory landscape and the key guidance documents and considerations for developing AI-based medical devices.


 

At present there is no specific legislation and no unified set of standards governing the use of AI in medical devices, although this gap is slowly being addressed. At the same time, these devices must comply with the existing regulatory obligations set out in the MDR (Regulation (EU) 2017/745) or IVDR (Regulation (EU) 2017/746) in Europe, and with FDA requirements in the US.

 

In broad terms, software incorporating AI/ML-enabled functions qualifies as medical device software (MDSW) from a regulatory perspective if it fulfills a medical purpose. The development of such MDSW should follow established principles covering the development life cycle, risk management, information security, cybersecurity, verification, and validation. Manufacturers must define a clear intended purpose for the device and demonstrate its benefit and performance by verifying it against specifications and validating it against stakeholder requirements and the intended use; they must also document the methods used for these verification and validation activities. Safety measures must be implemented to ensure that software development meets standards of repeatability, reliability, and performance. The development process must be thoroughly documented, and post-market surveillance activities need to be planned and implemented.
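
To make the verification step more concrete, the following minimal sketch (in Python) shows how measured model performance might be checked automatically against pre-specified acceptance criteria and recorded for a verification report. The metric names, thresholds, and the AcceptanceCriterion structure are hypothetical illustrations, not values or terms taken from any standard or guidance.

# Minimal sketch: automated verification of an AI model's performance against
# pre-specified acceptance criteria, as part of a documented V&V process.
# All metric names, thresholds, and helper structures are hypothetical.

from dataclasses import dataclass

@dataclass
class AcceptanceCriterion:
    metric: str       # e.g. "sensitivity" on a locked, independent test set
    threshold: float  # minimum acceptable value agreed in the verification plan

def verify(results: dict[str, float], criteria: list[AcceptanceCriterion]) -> bool:
    """Check measured performance against the criteria and print a traceable record."""
    all_passed = True
    for c in criteria:
        value = results[c.metric]
        passed = value >= c.threshold
        all_passed &= passed
        print(f"{c.metric}: {value:.3f} (>= {c.threshold:.3f}) -> {'PASS' if passed else 'FAIL'}")
    return all_passed

# Hypothetical measured metrics for a frozen model on an independent test set.
measured = {"sensitivity": 0.94, "specificity": 0.91}
criteria = [AcceptanceCriterion("sensitivity", 0.90), AcceptanceCriterion("specificity", 0.85)]
print("Verification passed:", verify(measured, criteria))

In practice, such checks would typically be run against a locked model version and an independent test set, with the output archived as objective evidence in the technical documentation.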

 

The standards commonly applied to the development of MDSW are ISO 13485 for quality management systems, IEC 62304 for medical device software life cycle processes, IEC 62366-1 for the application of usability engineering to medical devices, ISO 14971 for the application of risk management to medical devices, and IEC 82304-1 for general requirements for the product safety of health software. The ISO 81001-1:2021 standard additionally addresses safety, effectiveness, and security for health software and health IT systems.


As the topic of regulating AI in medical devices gains momentum, several specific standards are being developed.


ISO/IEC 42001, which specifies requirements for an artificial intelligence management system, has recently been published. Our blog post on ISO 42001 and AI regulatory compliance discusses how this standard supports AI manufacturers.

 

Recently, the Association for the Advancement of Medical Instrumentation (AAMI) and the British Standards Institution (BSI) published a guide to performing risk management for medical devices that incorporate AI and ML: TIR34971:2023 describes the application of ISO 14971 to AI and ML and may eventually be developed into an international standard.
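
As an illustration of how such ML-specific risks might be captured alongside conventional hazards, the sketch below models a single risk register entry in Python. The field names, the 1-5 scoring scales, and the example hazard are assumptions chosen for illustration; they are not prescribed by ISO 14971 or the AAMI/BSI guide.

# Minimal sketch of a risk register entry for an ML-specific hazard
# (performance degradation due to under-represented subgroups in the
# training data). Field names, scales, and values are illustrative only.

from dataclasses import dataclass, field

@dataclass
class RiskItem:
    hazard: str
    foreseeable_sequence: str
    harm: str
    severity: int                     # e.g. 1 (negligible) .. 5 (catastrophic)
    probability: int                  # e.g. 1 (improbable) .. 5 (frequent)
    risk_controls: list[str] = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        return self.severity * self.probability

bias_risk = RiskItem(
    hazard="Training data under-represents a patient subgroup",
    foreseeable_sequence="Model sensitivity drops for that subgroup in clinical use",
    harm="Missed or delayed diagnosis",
    severity=4,
    probability=3,
    risk_controls=[
        "Stratified performance evaluation across subgroups",
        "Labelling that states the validated patient population",
        "Post-market monitoring of subgroup performance",
    ],
)
print(bias_risk.hazard, "-> risk score:", bias_risk.risk_score)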


The FDA has implemented similar criteria, particularly within 21 CFR Part 820, specifically in the design controls requirements (§ 820.30). Several FDA guidance documents, such as those concerning software validation, the use of off-the-shelf software (OTSS), and cybersecurity, are key resources for those aiming to market AI-enabled medical devices in the USA.

 

The FDA has addressed the topic of AI and ML in Software as a Medical Device (SaMD) through an action plan that sets out five pillars for ensuring the safety and benefit of AI-enabled medical devices, including their modifications. These pillars are:

1. A tailored regulatory framework for AI/ML-based SaMD, built around a predetermined change control plan submitted premarket
2. Good Machine Learning Practice (GMLP)
3. A patient-centered approach incorporating transparency to users
4. Regulatory science methods related to algorithm bias and robustness
5. Real-world performance monitoring

The European Commission has proposed a new legislative framework to ensure the responsible and trustworthy development and deployment of AI systems while safeguarding fundamental rights and protecting the well-being of EU citizens. The AI Act aims to address the ethical, legal, and technical challenges posed by AI technology in the European Union (EU). It is expected to enter into force in 2024, with transitional periods for implementation of either 24 or 36 months.

 

The law assigns AI applications to four risk categories. First, applications and systems that create an unacceptable risk are banned; these include AI systems that manipulate human behavior, systems that exploit the vulnerabilities of specific groups (e.g. based on age or disability), social scoring systems, and AI systems used for indiscriminate surveillance.

 

Second, high-risk applications, specifically AI systems intended to be used as a product, or as a safety component of a product, covered by Union harmonization legislation including the MDR and the IVDR, are subject to specific legal requirements where the product must undergo a conformity assessment procedure involving a third-party conformity assessment body under the MDR or the IVDR.


Third, limited-risk AI systems, including many General Purpose AI (GPAI) applications, are subject to lighter obligations, chiefly concerning transparency: developers and deployers must ensure that end-users are aware that they are interacting with AI (e.g. chatbots and deepfakes).

 

Lastly, minimal-risk AI systems, such as AI-enabled video games and spam filters, are largely left unregulated.

 

With respect to the high-risk category, to which AI-enabled medical devices also belong, the AI Act lists a series of requirements that include:

- a risk management system maintained throughout the AI system's life cycle
- data governance ensuring the quality and relevance of training, validation, and testing data sets
- technical documentation
- record-keeping through automatic logging of events
- transparency and the provision of information to deployers
- human oversight
- an appropriate level of accuracy, robustness, and cybersecurity

For organizations and individuals involved in the development of AI systems, the National Institute of Standards and Technology (NIST) has developed the AI Risk Management Framework (AI RMF) to address the risks associated with artificial intelligence systems. The AI RMF is a voluntary, flexible, non-sector-specific framework for those who aim to promote the responsible development and use of AI and to increase its trustworthiness. It consists of two parts: the first frames the risks related to AI and analyzes trustworthiness; the second describes specific functions for addressing AI risks in practice.

 

The first part of the document outlines the challenges of AI risk management, including measuring risks, determining risk tolerance, and prioritizing risks. It emphasizes the need for organizational integration and management of AI risks, highlighting the importance of treating them as part of broader enterprise risk management strategies, and it acknowledges the diverse set of actors involved in the AI lifecycle and the importance of their collaboration in managing risks.

Enhancing AI trustworthiness can mitigate potential risks. The Framework highlights key characteristics of trustworthy AI and provides guidance for their implementation: validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy enhancement, and fairness with harmful bias managed. Achieving trustworthiness requires balancing these characteristics in the specific context of use. While all of these attributes are socio-technical in nature, accountability and transparency relate both to the AI system's internal processes and to external factors. Neglecting these characteristics can increase the likelihood and impact of adverse consequences.
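
As a small illustration of how one of these characteristics, fairness with harmful bias managed, can be made measurable, the sketch below compares a model's sensitivity across two patient subgroups. The data, the subgroup split, and the 0.05 tolerance are entirely hypothetical; in practice, metrics and thresholds would be defined in the organization's own risk management and trustworthiness documentation.

# Minimal sketch: quantifying one aspect of "fairness with harmful bias managed"
# by comparing sensitivity (true positive rate) across two patient subgroups.
# The data and the 0.05 tolerance are hypothetical placeholders.

def sensitivity(y_true: list[int], y_pred: list[int]) -> float:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)

# Hypothetical labels and predictions for two subgroups (1 = disease present).
group_a_true, group_a_pred = [1, 1, 1, 0, 1, 0], [1, 1, 1, 0, 1, 0]
group_b_true, group_b_pred = [1, 1, 1, 0, 1, 0], [1, 0, 1, 0, 0, 0]

gap = abs(sensitivity(group_a_true, group_a_pred) - sensitivity(group_b_true, group_b_pred))
print(f"Sensitivity gap between subgroups: {gap:.2f}")
if gap > 0.05:
    print("Gap exceeds tolerance -> investigate and apply risk controls")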

 

The AI RMF Core, described in the second part of the document, aims to facilitate dialogue, understanding, and practices for managing AI risks and to foster the development of trustworthy AI systems. It consists of four primary functions: GOVERN, MAP, MEASURE, and MANAGE. These functions are further divided into categories, subcategories, specific actions, and outcomes that help organizations and individuals establish their AI risk management framework.
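
One way an organization might operationalize this structure is to track its own activities against the four functions, as in the hypothetical sketch below. Only the function names come from the AI RMF; the example activities and status values are assumptions chosen for illustration.

# Minimal sketch: tracking organization-specific AI RMF activities as data.
# The four function names come from the framework; activities and statuses
# are hypothetical and would be tailored to the organization.

from dataclasses import dataclass

@dataclass
class RmfActivity:
    function: str   # one of GOVERN, MAP, MEASURE, MANAGE
    activity: str   # organization-specific action derived from the categories
    status: str     # e.g. "planned", "in progress", "done"

backlog = [
    RmfActivity("GOVERN", "Assign roles and accountability for AI risk", "done"),
    RmfActivity("MAP", "Document intended use and context of the AI system", "in progress"),
    RmfActivity("MEASURE", "Define metrics for the trustworthiness characteristics", "planned"),
    RmfActivity("MANAGE", "Prioritize and treat identified AI risks", "planned"),
]

for func in ("GOVERN", "MAP", "MEASURE", "MANAGE"):
    open_items = [a.activity for a in backlog if a.function == func and a.status != "done"]
    print(f"{func}: {len(open_items)} open item(s)")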

 

 

Ensuring the safety and efficacy of AI in medical devices requires a delicate balance: prioritizing patient safety, promoting transparency, addressing bias, maintaining continuous surveillance, and harnessing the potential of this innovative technology.

While we have highlighted the most relevant guidance documents and regulations for the development of AI-enabled medical devices, we believe that international harmonization of regulations, enabling a cohesive global approach and reducing regulatory burden, is key to the efficient adoption of AI-driven medical technologies. Internationally recognized, agile, and adaptive regulatory frameworks that can keep pace with technological advancements would foster an environment in which AI in medical devices can thrive while prioritizing patient welfare.

 

If you need support with establishing your regulatory strategy for AI-enabled medical devices, please contact us at info@quaregia.com and check out our AI presentation page. We can support you in taking a proactive approach to the development and documentation of your medical devices incorporating ML/AI algorithms, with gap analyses, technical dossier audits, and custom SOPs tailored to meet the newest requirements and regulations.


Last updated 2024-03-19