The European Union’s approach to regulating AI in healthcare has evolved steadily over the past decade. In 2017, the EU adopted the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR), which came into full effect in 2021 and 2022 respectively, raising safety, transparency, and performance standards for medical technologies. As AI integration accelerated, gaps in these frameworks became apparent, particularly around algorithmic transparency and adaptive learning systems. This led to the proposal of the Artificial Intelligence Act (AIA) in 2021, which introduced a risk-based approach to AI oversight. After years of negotiation, the AIA was formally adopted in 2024 and entered into force in August of that year. Recognizing the need for coherence, the Medical Device Coordination Group (MDCG) and the newly established Artificial Intelligence Board (AIB) issued the joint guidance MDCG 2025-6 in June 2025. This landmark document clarifies the interplay between the MDR, the IVDR, and the AIA, offering manufacturers, notified bodies, and regulators a harmonized path forward.
In this post, we discuss the key takeaways from MDCG 2025-6 and their implications for manufacturers of MDAI (medical device AI, i.e., AI systems used for medical purposes within the scope of the MDR or IVDR).
Medical Device AI (MDAI) is considered high-risk under the AIA if it is itself a medical device (or a safety-critical component of one), AND it requires third-party conformity assessment under the MDR/IVDR. Therefore, Class I sterile, measuring, or reusable surgical devices, as well as Class IIa, IIb, and III devices (MDR), and Class A sterile, B, C, and D devices (IVDR), generally meet this threshold. In contrast, in-house devices and Class I devices without notified body involvement do not automatically qualify as high-risk AI under the AIA. It is worth noting that the AIA also applies to AI-containing MDR Annex XVI devices that require third-party conformity assessment.
While dual compliance is required, integrated documentation is encouraged
In a nutshell, medical devices with AI components must comply with both the Medical Device Regulation (MDR) or In Vitro Diagnostic Regulation (IVDR) and the Artificial Intelligence Act (AIA). These laws apply simultaneously and complement each other. In terms of timing, all AIA obligations for Annex I high-risk AI systems become mandatory on 2 August 2027.
Manufacturers are encouraged to integrate AIA requirements into existing MDR/IVDR documentation to avoid duplication. Both MDR/IVDR and the AIA require robust quality management systems (QMS). While MDR/IVDR QMS focuses on device safety and performance, the AIA introduces AI-specific elements—like algorithmic transparency, bias detection, cybersecurity, and data governance. Manufacturers can combine both systems, but must meet the individual expectations of each regulation.
For high-risk MDAI, conformity assessment follows MDR/IVDR pathways, with additional scrutiny from the AIA. The AIA doesn’t duplicate the assessment—it builds on it. Technical documentation must now include AI-specific validation, data governance protocols, risk mitigation strategies, and details of AI capabilities.
Risk management gets trickier
Manufacturers of high-risk MDAI are expected to integrate additional risk management requirements specific to the AIA into their existing documentation and procedures under the MDR and IVDR.
The AIA stipulates that high-risk AI systems must be tested throughout development—ideally including real-world conditions—to ensure they consistently meet performance and safety requirements using predefined metrics and thresholds. Additionally, providers must supply clear information and, where necessary, tailored training to deployers based on their expertise and intended use context, while also considering potential risks to vulnerable groups like minors.
Human oversight mechanisms proportional to the risks, level of autonomy, and context of use of the high-risk MDAI, allowing appropriate supervision by healthcare professionals and institutions, should be implemented as part of the risk mitigation strategy. These mechanisms should be aligned with the intended purpose of the high-risk MDAI system and take into account conditions of reasonably foreseeable misuse.
Transparency, human oversight, and user training are not optional
Both the AI Act (AIA) and the MDR/IVDR place strong emphasis on transparency and human oversight as core design and safety principles for AI-enabled medical devices (MDAI). Under the AIA, high-risk MDAI must be designed to allow for meaningful human oversight—this means integrating clear interfaces, alerts, and mechanisms like emergency stop functions to ensure users can intervene when necessary. The MDR and IVDR reinforce this requirement through usability engineering and risk management measures tailored to the device’s classification and complexity, as also outlined in the previous section.
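To make this concrete, the sketch below shows one way a manufacturer might implement such oversight in software: a decision gate that routes low-confidence outputs to a clinician for review and supports an operator-triggered emergency stop. The class, field names, and confidence threshold are our own illustrative assumptions; neither the AIA nor the MDR/IVDR prescribes a specific design.

```python
from dataclasses import dataclass


@dataclass
class AiOutput:
    """A single model output with its confidence score (hypothetical structure)."""
    finding: str
    confidence: float


class OversightGate:
    """Minimal human-oversight sketch: low-confidence outputs are flagged for
    clinician review, and a kill switch suppresses all further outputs."""

    def __init__(self, review_threshold: float = 0.85):
        self.review_threshold = review_threshold  # illustrative value, not regulatory
        self.stopped = False

    def emergency_stop(self) -> None:
        # Operator-triggered stop: no further outputs are released.
        self.stopped = True

    def route(self, output: AiOutput) -> str:
        if self.stopped:
            return "suppressed"          # system halted by a human operator
        if output.confidence < self.review_threshold:
            return "needs_human_review"  # flag for clinician confirmation
        return "released"                # high confidence: shown, labeled as AI output
```

In practice, the "needs_human_review" path would surface through the device's user interface with appropriate alerts, which is where the MDR/IVDR usability engineering requirements come into play.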
Transparency is equally critical. The AIA requires manufacturers to ensure that MDAI outputs are understandable, supported by clear instructions for use, and provide insight into how AI contributes to decisions. Similarly, MDR/IVDR demand clear labeling, performance disclosures, and documentation that allows traceability and informed use. Together, these regulations ensure that users and deployers—particularly in high-stakes healthcare environments—can confidently understand, trust, and safely operate AI-powered medical technologies.
Furthermore, the requirements on human oversight and transparency reinforce the need to provide deployers and affected persons with sufficient information to understand the system’s capabilities, limitations, and potential risks. Manufacturers must ensure that deployers (e.g., clinicians) receive adequate training to understand and correctly use MDAI. The AIA introduces obligations around AI literacy, ensuring users grasp the system’s strengths, limitations, and risks. This supports safe deployment and minimizes misuse.
Cybersecurity is a shared priority
Both MDR/IVDR and AIA require manufacturers to anticipate and mitigate cyber threats, ensure data protection, and maintain AI system integrity. The AIA further emphasizes protecting training data, AI models, and the digital infrastructure against manipulation or unauthorized access.
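As one concrete example of protecting AI models against manipulation, a manufacturer might verify the integrity of a deployed model artifact against a digest recorded at release time. The sketch below illustrates this with SHA-256; the function names are illustrative, and the regulations do not mandate any particular mechanism.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks to bound memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model_artifact(path: Path, expected_digest: str) -> bool:
    """Return True only if the deployed model file matches the digest recorded
    at release time, guarding against tampering in transit or storage."""
    return sha256_of(path) == expected_digest
```

A real deployment would typically also sign the digest manifest itself, so that an attacker who can alter the model file cannot simply alter the recorded digest as well.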
Traceability gets an added dimension
While the MDR and the IVDR require that devices are traceable throughout the supply chain and device lifecycle (traceability of device movement), article 12 of the AIA introduces requirements related to functional traceability. Accordingly, AI systems are required to maintain logs of system performance and behaviour throughout their lifecycle to support monitoring and post-market surveillance, effectively introducing the requirements of traceability of system functioning and performance.
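To illustrate what functional traceability could look like in practice, the sketch below shows a minimal append-only inference log. The field names are our own assumptions: Article 12 requires automatic recording of events but does not prescribe a specific schema.

```python
import json
import time


class InferenceLog:
    """Append-only log of inference events, in the spirit of Article 12
    functional traceability. Field names are illustrative, not prescribed."""

    def __init__(self):
        self.records: list = []

    def record(self, model_version: str, input_id: str,
               output: str, confidence: float) -> dict:
        entry = {
            "timestamp": time.time(),        # when the event occurred
            "model_version": model_version,  # which model produced the output
            "input_id": input_id,            # reference to the input data
            "output": output,
            "confidence": confidence,
        }
        self.records.append(entry)
        return entry

    def export(self) -> str:
        # JSON Lines export for post-market surveillance review
        return "\n".join(json.dumps(r) for r in self.records)
```

In a real device, such logs would be tamper-evident and retained for the period required by the applicable regulations, and they feed directly into the post-market monitoring discussed further below.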
Clinical and performance evaluation remain essential
All three regulations converge in requiring clinical/performance validation to demonstrate that MDAI systems are safe and provide accurate, reliable, and clinically relevant outputs in the intended patient population.
High-risk MDAI must undergo verification and validation to demonstrate robustness, accuracy, and respect for fundamental rights, and to show that they operate as intended and meet safety and performance requirements.
High-risk MDAI can undergo clinical investigations or performance studies prior to being placed on the market or put into service; under the AIA, this constitutes real-world testing and is regulated in Article 60(1).
Data governance and bias mitigation
Under Article 10 of the AIA, manufacturers must ensure training, validation, and testing data sets are high quality, relevant, representative, and free from bias. This complements MDR/IVDR's demand for clinical and performance data that reflects intended use. MDAI systems must be designed to detect and mitigate harmful biases, protecting both safety and fundamental rights.
Data collection protocols shall aim to ensure that the relevant characteristics of the intended patient population, intended use environment, and measurement inputs are sufficiently represented in a sample of adequate size in the datasets for training, validation, testing, and monitoring so that results can be reasonably generalized to the targeted population.
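As an illustration of how such a representativeness check might be automated, the sketch below flags subgroups that are under-sized or whose share of the dataset deviates from the intended-population share. The subgroup labels and thresholds are our own illustrative assumptions, not regulatory values.

```python
def check_representation(dataset_counts: dict,
                         target_shares: dict,
                         min_per_group: int = 50,
                         tolerance: float = 0.05) -> list:
    """Return human-readable issues for subgroups that are under-sized or
    whose dataset share deviates from the intended-population share by more
    than `tolerance`. All thresholds here are illustrative assumptions."""
    total = sum(dataset_counts.values())
    issues = []
    for group, target in target_shares.items():
        n = dataset_counts.get(group, 0)
        if n < min_per_group:
            issues.append(f"{group}: only {n} samples (min {min_per_group})")
        share = n / total if total else 0.0
        if abs(share - target) > tolerance:
            issues.append(f"{group}: share {share:.2f} vs target {target:.2f}")
    return issues
```

A check like this would run when datasets are assembled and again whenever they are updated, with any flagged gaps documented and addressed as part of the data governance records.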
In addition, validation of the training data used by the MDAI should be demonstrated as part of these studies to ensure the accuracy, reliability, and effectiveness of the MDAI.
Post-market monitoring must catch drift and interaction
AI systems can evolve post-deployment, making continuous performance monitoring vital. Manufacturers must detect model drift, emerging safety risks, or unintended interactions with other systems. Therefore, one of the essential requirements of the AIA is that all high-risk MDAI must have technical capabilities for the automatic recording of events (logs) over the lifetime of the MDAI.
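As an example of how model drift might be detected from such logs, the sketch below computes the population stability index (PSI) between a reference distribution of model outputs and a recent one. PSI is a common industry heuristic, not a method prescribed by the AIA, and the 0.2 alert threshold is a rule of thumb rather than a regulatory value.

```python
import math


def population_stability_index(expected: list, observed: list) -> float:
    """PSI between two binned distributions (bin shares each summing to ~1).
    Larger values indicate a bigger shift between the two distributions."""
    eps = 1e-6  # floor for empty bins, to avoid log(0)
    psi = 0.0
    for e, o in zip(expected, observed):
        e = max(e, eps)
        o = max(o, eps)
        psi += (o - e) * math.log(o / e)
    return psi


def drift_alert(expected: list, observed: list, threshold: float = 0.2) -> bool:
    """Flag drift when PSI exceeds a threshold (0.2 is a common rule of thumb)."""
    return population_stability_index(expected, observed) > threshold
```

In practice, the "expected" bins would come from validation-time output distributions and the "observed" bins from a recent window of the device's logged outputs, with alerts feeding the post-market surveillance process.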
Additionally, the AIA requires post-market monitoring plans, which may be integrated into MDR/IVDR procedures. A template for such a plan will be made available in early 2026.
Requirements for substantial modifications will follow
In Article 43(4), the AIA introduces the concept of a substantial modification, which triggers a new conformity assessment procedure for the high-risk MDAI. Guidelines on the practical implementation of these provisions are pending.
Changes to high-risk MDAI that have been pre-determined by the manufacturer, assessed at the moment of the initial conformity assessment, and included in the technical documentation referred to in AIA Annex IV point 2(f) therefore do not constitute a substantial modification. Guidance on pre-determined change control plans for MDAI is currently under development.
Final Thoughts
The MDCG 2025-6 guidance marks a pivotal moment for manufacturers, regulators, and healthcare institutions. It lays out a blueprint for ensuring that AI-enabled medical devices meet both technological excellence and societal trust standards. While the road to compliance may be complex, the reward is a safer, more transparent, and ethically robust healthcare AI ecosystem.
If you need support with integrating the AIA requirements into pre-existing quality and regulatory frameworks, please contact us at info@quaregia.com or book a free 30-minute consultation here.
Last updated 2025-08-01