Understanding the AI Act’s Risk Classification. What Is the EU AI Act’s Approach?
The EU AI Act introduces a nuanced, risk-based approach to regulating AI, calibrating requirements to the level of risk an AI system poses to health, safety, and fundamental rights. This horizontal legislative framework applies across all sectors and industries and focuses on the intended use of AI systems. Departing from the binary low-risk/high-risk categorization of earlier policy discussions, the Act employs a four-tier risk classification:
- Unacceptable risks – leading to prohibited practices;
- High risks – that trigger most of the obligations under the EU AI Act;
- Limited risks – with associated transparency requirements;
- Minimal risks – where stakeholders, whether established within or outside the EU, are encouraged to develop voluntary codes of conduct.
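Read as a decision structure, the four tiers map each system to a headline regulatory consequence. The Python sketch below is purely illustrative shorthand: names such as RiskTier and CONSEQUENCES are our own and do not appear in the Act; the code simply restates the tier-to-obligation mapping above.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the AI Act's four risk tiers (our naming)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Shorthand mapping from tier to the Act's headline consequence.
CONSEQUENCES = {
    RiskTier.UNACCEPTABLE: "practice prohibited; severe penalties apply",
    RiskTier.HIGH: "full compliance regime: conformity assessment, "
                   "transparency, robustness, post-market monitoring",
    RiskTier.LIMITED: "transparency obligations (e.g. disclose AI interaction)",
    RiskTier.MINIMAL: "no mandatory requirements; voluntary codes encouraged",
}

def consequence(tier: RiskTier) -> str:
    """Look up the headline regulatory consequence for a given tier."""
    return CONSEQUENCES[tier]

print(consequence(RiskTier.HIGH))
```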
Under this classification, certain AI practices deemed to pose unacceptable risks are banned outright. These prohibitions cover applications such as social scoring and cognitive behavioral manipulation, which the legislator considers inherently harmful. Engaging in a prohibited practice attracts the Act's most severe penalties: fines of up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
The primary focus of the EU AI Act is on high-risk AI systems (HRAIS), which, while not banned, are subject to strict regulation. The Act classifies a system as high-risk via two routes: it is a safety component of (or is itself) a product covered by the EU harmonisation legislation listed in Annex I, such as medical devices or machinery; or it is intended for one of the use cases exhaustively listed in Annex III, such as employment, education, or law enforcement.
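To make the two routes concrete, the minimal sketch below expresses the classification logic, assuming two invented boolean flags. The field names are hypothetical and compress legal tests that in reality carry further conditions (for the product route, for instance, the Annex I legislation must also require third-party conformity assessment).

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical profile; the Act's actual tests carry further conditions."""
    safety_component_of_annex1_product: bool  # e.g. medical device, machinery
    annex3_use_case: bool                     # listed use, e.g. CV screening

def is_high_risk(profile: AISystemProfile) -> bool:
    # Either route into the high-risk tier suffices (simplified).
    return (profile.safety_component_of_annex1_product
            or profile.annex3_use_case)

# Example: a CV-screening tool falls under an Annex III use case.
print(is_high_risk(AISystemProfile(safety_component_of_annex1_product=False,
                                   annex3_use_case=True)))  # True
```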
HRAIS providers must comply with rigorous obligations throughout the AI system's lifecycle, from development through post-market monitoring. These include conducting conformity assessments before the system is placed on the market, ensuring its transparency, robustness, and accuracy, and operating post-market monitoring systems once it is in use. Because risks can also arise when AI systems are placed on the market and deployed, other market operators, such as deployers and distributors of HRAIS, carry obligations of their own. Together, these duties are meant to cover the full range of AI-related risks that warrant regulatory oversight.
The Act also extends beyond individual AI systems to general-purpose AI (GPAI) models, recognizing that risks embedded in such models can propagate widely because they are integrated into applications across many sectors. The final version of the Act includes a dedicated chapter on GPAI models, imposing specific obligations on their providers to manage these risks.
A subset of GPAI models, termed GPAI models with systemic risks (GPAISR), is identified based on high-impact capabilities, assessed through either technical capability or market reach. The Act introduces benchmarks and technical tools to identify such high-impact models and imposes additional obligations on GPAISR providers. Alternatively, the European Commission may designate a GPAI model as posing systemic risk by decision, which likewise triggers the additional obligations.
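One of those technical tools is a quantitative presumption: the Act presumes high-impact capabilities where the cumulative compute used to train the model exceeds 10^25 floating-point operations. The sketch below encodes just that headline test plus the Commission-designation route; the function and parameter names are our own, and the real assessment also weighs further capability and reach indicators.

```python
# Cumulative training compute above which systemic risk is presumed (FLOPs).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def is_gpaisr(training_flops: float, designated_by_commission: bool = False) -> bool:
    """Simplified check: compute-based presumption OR Commission designation."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD or designated_by_commission

print(is_gpaisr(3e25))                                 # True: exceeds the threshold
print(is_gpaisr(1e24, designated_by_commission=True))  # True: designated
```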
Transparency is key to managing AI-related risks, and the Act mandates transparency obligations tailored to both HRAIS and GPAI models. In addition, a cross-cutting transparency regime applies to AI systems posing limited risks; because it attaches to specific functionalities rather than to a risk tier, it can reach any AI system, including HRAIS and minimal-risk systems. For instance, when an AI system interacts with humans, the provider must ensure that users are informed they are interacting with an AI system, unless this is obvious from the context.
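As a toy illustration of that disclosure duty, a provider of a conversational system might prepend a notice to the first reply. Everything below, from the function name to the wording of the notice, is invented for illustration; the Act prescribes the duty to inform, not any particular implementation.

```python
from typing import Callable

def first_reply(user_message: str, generate: Callable[[str], str]) -> str:
    """Prepend an AI-interaction notice to the first reply (illustrative only)."""
    notice = "Please note: you are interacting with an AI system.\n"
    return notice + generate(user_message)

# Usage with a stand-in reply generator:
print(first_reply("Hello", lambda msg: f"Echo: {msg}"))
```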
On a global scale, there is widespread support for a risk-based approach to AI regulation. However, concerns have been raised that the Act’s risk classification criteria, particularly for identifying unacceptable risks, lack clarity. For example, the prohibition of AI systems that manipulate individuals through subliminal techniques is conceptually straightforward, but the practical definition of harm and the specific applications subject to prohibition remain ambiguous. Future guidelines and interpretive tools may be necessary to clarify these issues.
The Act grants the European Commission the authority to amend the list of HRAIS in Annex III, but any additions or removals must fall within eight predefined high-risk areas. Expanding these areas to include new high-risk categories would require new legislation from the European Parliament and the Council. Given the rapid development of AI across various sectors, the Act’s adaptability will be crucial in addressing unforeseen risks.
Another point of contention is the level of transparency required of HRAIS deployers. While the Act concentrates transparency duties on developers, there are growing calls for deployers to face comparable obligations: disclosing which AI systems they use, for what purposes, and in what context. Such an approach raises its own problems, as it could saddle deployers with obligations that are neither appropriate nor manageable. Striking the right balance of responsibilities among market operators is crucial to a functional and competitive AI market in the EU.
One significant gap in the Act is that it does not address risks arising from interactions between multiple AI systems. Systems that individually pose low or minimal risks can interact in ways that create significant risks for individuals and society. These “interactive risks” currently fall outside the Act’s scope, although they may be picked up by the proposed EU AI Liability initiative. The EU AI Act, essentially a product regulation for AI, takes a linear, system-by-system view of risk and does not account for the complex interplay of AI systems.
Despite these potential shortcomings, the EU AI Act represents a landmark in AI regulation. Its comprehensive and detailed rules are expected to have a profound impact on the European AI market and potentially beyond. The true test of the Act’s effectiveness will be in its implementation and how well it adapts to the evolving landscape of AI technology.