ISO 42001 and AI Regulatory Compliance
The regulatory compliance landscape for AI in medical devices has never been more confusing. While there is growing consensus that requirements for AI should encompass transparency, fairness, accountability, privacy, safeguards for sensitive data, and clear documentation, the sheer number of guidelines and regulations is overwhelming for manufacturers. Choosing a regulatory compliance pathway for AI in medical devices and integrating it into existing quality management frameworks can be daunting.
In this post, we offer a short overview of how ISO 42001 can be used to establish a compliance framework for AI in healthcare that is aligned with the requirements of the EU AI Act.
At QUAREGIA, we can support you with a tailored quality management system that ensures strict adherence to both ISO 42001 and the EU AI Act and provides a robust compliance framework for developing AI-containing medical devices.
Schedule your complimentary consultation here
Key Requirements of ISO 42001
The standard delineates crucial prerequisites for fostering responsible AI deployment and addressing its associated challenges. Some of these requirements encompass those of the EU AI Act, so the two documents can be addressed simultaneously.
Understanding the Organization and Its Context - the standard mandates that the organization define its role relative to AI systems (provider, producer, customer, partner, etc.) and conduct a thorough assessment of both external and internal factors influencing the organization's mission and its capacity to achieve the desired AI management system outcomes.
Leadership and Commitment - the AI management system requirements shall be integrated into the organization’s business and quality processes, which requires dedication and support from top management to foster a culture of continual improvement.
AI Policy - developing a robust AI policy tailored to the organization’s goals is therefore a cornerstone of an AI QMS. This policy should provide a framework for setting clear AI objectives and include commitments to meet applicable requirements and to continually improve the AI management system. Measurable AI objectives, consistent with the AI policy, shall be identified, communicated throughout the organization, monitored, and documented.
Roles, Responsibilities, and Authorities - top management is tasked with ensuring that relevant roles within the organization are assigned and communicated. A responsible person shall be designated to implement the provisions of the standard and to report on the performance of the AI management system to top management.
Risk and Opportunity Planning - organizations must proactively identify and mitigate risks while capitalizing on opportunities inherent in AI deployment, thereby facilitating continual improvement.
AI System Impact Assessments - directly linked to requirements for comprehensive risk assessment and risk treatment, ISO 42001 requires organizations to perform a systematic evaluation of the potential ramifications of AI systems on individuals, groups, and societies. This process should be contextualized within technical and societal frameworks, with comprehensively documented findings.
In parallel, the EU AI Act’s requirement to test high-risk AI systems against preliminarily defined metrics and probabilistic thresholds appropriate to their intended purpose, in order to identify the most appropriate risk management measures, shall also be taken into account.
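To make this concrete, the sketch below shows one possible way to check a validation run against preliminarily defined metrics and probabilistic thresholds; the metric names and threshold values are illustrative assumptions, not figures taken from the EU AI Act or ISO 42001.

```python
# Minimal sketch: evaluating a model release candidate against preliminarily
# defined metrics and probabilistic thresholds (hypothetical names and values).
from dataclasses import dataclass


@dataclass
class MetricThreshold:
    name: str
    threshold: float
    higher_is_better: bool = True

    def passes(self, value: float) -> bool:
        return value >= self.threshold if self.higher_is_better else value <= self.threshold


# Acceptance criteria defined before testing, appropriate to the intended purpose.
ACCEPTANCE_CRITERIA = [
    MetricThreshold("sensitivity", 0.95),
    MetricThreshold("specificity", 0.90),
    MetricThreshold("expected_calibration_error", 0.05, higher_is_better=False),
]


def evaluate_release_candidate(measured_metrics: dict[str, float]) -> dict:
    """Compare measured validation metrics against the predefined thresholds
    and return a record suitable for the risk management file."""
    results = {
        c.name: {
            "measured": measured_metrics[c.name],
            "threshold": c.threshold,
            "pass": c.passes(measured_metrics[c.name]),
        }
        for c in ACCEPTANCE_CRITERIA
    }
    results["overall_pass"] = all(r["pass"] for r in results.values())
    return results


report = evaluate_release_candidate(
    {"sensitivity": 0.97, "specificity": 0.92, "expected_calibration_error": 0.03}
)
print(report["overall_pass"])  # True only if all predefined thresholds are met
```

A record of this kind can feed into the risk management file, where any failing criterion would trigger the selection of additional risk management measures.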
Change Management - if the organization determines the need for changes to the AI management system, ISO 42001 mandates that the changes be carried out in a planned manner.
Resource and Competence Allocation - the standard demands the provision of necessary resources and expertise for establishing, implementing, and enhancing the AI management system.
Communication - the organization shall define how internal and external communication about the AI management system will take place.
Particular attention shall be given to transparency and the provision of information to users, as also highlighted by the EU AI Act.
Effective Documented Information Management - the organization must ensure the accessibility, suitability, and security of the documented information needed for the AI management system’s efficacy, in line with its data retention policies. The EU AI Act likewise defines a retention period for documentation of 10 years after the AI system has been placed on the market or put into service.
Process Control - processes related to the operation of the AI management system (e.g. AI system development and usage life cycle management) shall be controlled and monitored effectively, ensuring they fulfil their intended purpose.
Structured Approach to AI System Life Cycle Management - the standard requires delineating stringent criteria for each stage of the AI system's life cycle, ensuring responsible design, development, verification and validation, deployment, and operation of the AI system (including event logging when in use), supported by comprehensive technical documentation and specifications.
Maintaining up-to-date technical documentation and record keeping (automatic recording of events/logs) are also key requirements of the EU AI Act.
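As a simple illustration of what automatic event logging during operation might look like, the following sketch appends structured, timestamped records to an audit log; the event fields and the JSON Lines format are assumptions for illustration, not requirements taken from the standard or the Act.

```python
# Minimal sketch: automatic recording of events (logs) while an AI system is in use.
# Field names and the JSON Lines file format are illustrative assumptions.
import json
import time
import uuid
from pathlib import Path

LOG_FILE = Path("ai_system_events.jsonl")


def log_event(event_type: str, model_version: str, details: dict) -> None:
    """Append a timestamped, traceable event record to the audit log."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event_type": event_type,          # e.g. "inference", "override", "error"
        "model_version": model_version,
        "details": details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: record an inference event together with an input reference and the output.
log_event(
    event_type="inference",
    model_version="1.4.2",
    details={"input_ref": "study-001", "prediction": "lesion", "confidence": 0.87},
)
```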
Data Management - the organization should define, document, and implement data management processes related to the development of AI systems. In particular, attention should be paid to documenting the acquisition, selection, and preparation of data, as well as requirements for data quality, data provenance, and the data life cycle.
Additionally, the EU AI Act’s provisions on appropriate data governance practices shall be considered; these require taking into account the characteristics or elements that are particular to the specific geographical, behavioral, or functional setting within which the high-risk AI system is intended to be used.
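One possible way to make such data documentation tangible is to capture acquisition, selection, and preparation metadata alongside each data set version, as in the sketch below; the schema and field names are illustrative assumptions rather than a prescribed format.

```python
# Minimal sketch: documenting provenance and preparation for a training data set.
# The schema and example values are illustrative assumptions, not a required format.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class DatasetRecord:
    name: str
    version: str
    source: str                      # acquisition: where the data came from
    collection_period: str
    intended_setting: str            # geographical / behavioral / functional context
    selection_criteria: list[str] = field(default_factory=list)
    preparation_steps: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)


record = DatasetRecord(
    name="ct-thorax-train",
    version="2024.1",
    source="Hospital consortium (anonymized exports)",
    collection_period="2021-2023",
    intended_setting="Adult thoracic CT, European tertiary-care hospitals",
    selection_criteria=["slice thickness <= 1.5 mm", "age >= 18"],
    preparation_steps=["DICOM de-identification", "resampling", "intensity normalization"],
    known_limitations=["underrepresentation of pediatric cases"],
)

# Serialize for inclusion in the technical documentation.
print(json.dumps(asdict(record), indent=2))
```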
Measurement, Analysis, and Continuous Improvement - the standard specifies how the performance of the AI QMS shall be evaluated, including requirements for monitoring, measurement, and analysis of data; internal audits; management reviews; and nonconformities and corrective actions.
Additionally, the EU AI Act puts forward requirements for a post-market surveillance system and the reporting of serious incidents to the market surveillance authorities of the Member States where that incident occurred.
ISO 42001 provides in Annex A reference control objectives and controls that give organizations a framework for the development, operation, and risk assessment of AI systems. Annex B additionally provides implementation guidance for the controls listed in Annex A, which can be used as a starting point for developing organization-specific implementation controls.
Furthermore, compliance with the requirements of the EU AI Act regarding human oversight and cybersecurity would round out a comprehensive framework for the responsible development of AI-containing medical devices.
Integrating ISO 42001, the EU AI Act, and ISO 13485 Requirements
The adoption of ISO 42001 significantly influences how organizations handle AI policies and controls, emphasizing ethical AI usage and ensuring control mechanisms are in place to guarantee responsible, secure, and transparent development, deployment, and management of AI systems. It also allows organizations to comply with the QMS requirements of the EU AI Act.
At QUAREGIA, we support organizations in seamlessly integrating ISO 42001 with pre-existing quality management systems in 5 steps:
a gap analysis that reviews existing policies and processes and identifies a strategy for integrating ISO 42001 requirements within the existing systems without disrupting ongoing operations
enhancement of risk management procedures - integrating AI-specific risk and impact assessments into regular activities
upgrading management review procedures and establishing AI objectives to embrace a continuous improvement mindset
ensuring staff and supplier compliance - educating staff and suppliers on the significance of ISO 42001 and their roles in compliance; updating the supplier qualification process to include AI requirements
implementing compliance surveillance through regular process monitoring and auditing of internal and external stakeholders
If your organization does not yet have a quality management system in place, we can provide customer-specific lean QMS solutions based on well-established frameworks to support your compliance requirements and timelines.
Schedule your complimentary consultation here
Finally, QUAREGIA’s cybersecurity experts can support you with implementing a customized cybersecurity process, including SOPs, templates, and training to guarantee the secure development of AI systems per the latest standards. With their guidance, your team can confidently navigate the complexities of cybersecurity, safeguarding your AI projects against potential threats and vulnerabilities.
Conclusion
As with any new standard or piece of legislation, meeting the requirements of ISO 42001 and the EU AI Act may seem daunting at first. Proactively embracing compliance and integrating these requirements stepwise into pre-existing quality frameworks ensures a smooth organizational adaptation to the new requisites. As standards and legislation converge and become harmonized, this proactive approach enables flexible adaptation to evolving requirements, ensuring your organization continues to deliver compliant, high-quality AI systems and devices.
Last updated 2024-03-28