Article 9: Risk Management System
Updated on 31 July 2024 on the basis of the version published in the Official Journal of the EU on 12 July 2024, which entered into force on 1 August 2024.
1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:
- (a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;
- (b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose, and under conditions of reasonably foreseeable misuse;
- (c) the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring system referred to in Article 72;
- (d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a).
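Article 9 prescribes no particular data model for these steps. Purely as an illustration, a provider might track one iteration of the identify-evaluate-mitigate cycle in a simple risk register; every name, field, scale and threshold below is a hypothetical sketch, not anything the Regulation requires.

```python
from dataclasses import dataclass, field
from enum import Enum

class Source(Enum):
    """Where a risk was identified (mirrors steps (a)-(c) above)."""
    INTENDED_USE = "intended use"              # step (a)
    FORESEEABLE_MISUSE = "foreseeable misuse"  # step (b)
    POST_MARKET = "post-market monitoring"     # step (c)

@dataclass
class Risk:
    description: str
    source: Source
    severity: int    # hypothetical 1 (low) .. 5 (high) scale
    likelihood: int  # hypothetical 1 (rare) .. 5 (frequent) scale
    measures: list[str] = field(default_factory=list)  # step (d)

    @property
    def score(self) -> int:
        # Simplistic severity x likelihood scoring, for illustration only
        return self.severity * self.likelihood

# One iteration of the continuous process: identify, evaluate, mitigate
register = [
    Risk("Misclassification harms an affected person",
         Source.INTENDED_USE, severity=4, likelihood=2),
]
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    if risk.score >= 6:  # invented acceptability threshold
        risk.measures.append("redesign / add human oversight")
```

Because paragraph 2 frames the system as a continuous iterative process, such a register would be revisited and updated throughout the lifecycle, including with post-market monitoring data (step (c)).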
3. The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
4. The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Section, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.
5. The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable.
In identifying the most appropriate risk management measures, the following shall be ensured:
- (a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible through adequate design and development of the high-risk AI system;
- (b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;
- (c) provision of information required pursuant to Article 13 and, where appropriate, training to deployers.
With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context in which the system is intended to be used.
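The measures in paragraph 5 form a hierarchy: first eliminate or reduce by design, then mitigate what remains, then inform and train deployers. As a rough sketch only, with entirely hypothetical scores, thresholds and effect sizes, that hierarchy could be applied like this:

```python
def select_measures(score: int, eliminable: bool, acceptable: int = 4):
    """Illustrative application of the paragraph 5 hierarchy.

    The numeric score, the 'acceptable' threshold, and the assumed
    effect of each measure are invented for demonstration.
    """
    measures = []
    residual = score
    if eliminable:
        measures.append("eliminate risk through design")  # design step
        residual = 0
    else:
        measures.append("reduce risk through design")     # design step
        residual -= 2  # assumed effect of redesign
        if residual > acceptable:
            measures.append("add mitigation and control measures")
            residual -= 2  # assumed effect of mitigation
    # Information and training under Article 13 always applies
    measures.append("provide Article 13 information and deployer training")
    return measures, residual

measures, residual = select_measures(score=8, eliminable=False)
# residual must end up acceptable per paragraph 5
```

The point of the sketch is the ordering, not the arithmetic: design-level elimination is attempted before mitigation, and information to deployers complements rather than replaces the other measures.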
6. High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Section.
7. Testing procedures may include testing in real-world conditions in accordance with Article 60.
8. The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.
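Paragraph 8 requires testing against metrics and probabilistic thresholds defined in advance. A minimal sketch of such a release gate, with an invented metric (error rate) and an invented threshold, might look like this:

```python
# Hypothetical acceptance test: the metric and threshold must be
# defined before testing; the 5% error-rate bound here is invented.
THRESHOLD = 0.05  # maximum tolerated error rate for the intended purpose

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the reference labels."""
    errors = sum(p != y for p, y in zip(predictions, labels))
    return errors / len(labels)

def release_gate(predictions, labels, threshold=THRESHOLD) -> bool:
    """True only if the measured error rate meets the pre-defined threshold."""
    return error_rate(predictions, labels) <= threshold

preds = [1, 0, 1, 1]
labels = [1, 0, 0, 1]
print(release_gate(preds, labels))  # error rate 0.25 > 0.05, prints False
```

In practice the metrics would be chosen to match the system's intended purpose, and the same gate would be run at development milestones as well as before placing on the market or putting into service.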
9. When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on persons under the age of 18 and, as appropriate, other vulnerable groups.
10. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, the risk management procedures established pursuant to that law.