When it comes to the EU AI Act, prohibited systems are clear-cut – they’re simply banned. Limited-risk AI systems, on the other hand, face only light transparency requirements, and minimal-risk systems face no new obligations at all. But high-risk AI systems sit in the middle ground that matters most for businesses. They are allowed – but only if you comply with strict, detailed obligations. For many organizations, this is where the real legal and operational challenge lies.
The EU AI Act sets numerous obligations to ensure that AI used in critical contexts – from recruitment and education to biometric identification and credit scoring – does not harm fundamental rights, health, or safety. Because these systems are not banned, they will remain part of business and public life – but only if you comply with rigorous rules. Understanding whether your AI system is high-risk, and what this means for your organization, is the first step towards EU AI Act readiness and responsible AI governance.
Improving AI literacy across your organization will also be essential to ensure these obligations are properly understood and implemented. As you prepare for compliance, a strong internal foundation in AI literacy becomes just as important as legal alignment.
Which AI Systems Fall Under the High-Risk Category?
Under Article 6 of the EU AI Act, high-risk AI systems are those that pose a significant risk to health, safety, or fundamental rights. They are divided into two main groups:
- AI systems integrated into or constituting products regulated under existing EU harmonization legislation
- AI systems used in specific high-risk use cases listed in Annex III of the AI Act.
Determine the risk for each of your AI systems with our AI Act Checker
1. AI systems in products under existing EU legislation
The first category includes AI systems that are either themselves a product or are used as a safety component within a product, and that product is already regulated by specific EU harmonization legislation.
For such an AI system to be considered high-risk, two cumulative conditions must be met (see the sketch after this list):
- The AI system is either a safety component of a product (meaning it serves a safety function of that product) or an AI system that is itself considered a product under EU harmonization legislation.
- The product (whether the AI system itself or the product into which the AI system is integrated) is required to undergo a third-party conformity assessment under that harmonization legislation before it can be placed on the market or put into service in the EU.
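To make the cumulative nature of these two conditions concrete, here is a minimal, purely illustrative sketch in Python. The class and field names are hypothetical and not part of the AI Act; an actual classification always requires a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Hypothetical screening flags for an AI system tied to an Annex I product."""
    is_safety_component: bool           # serves a safety function of a regulated product
    is_product_itself: bool             # the AI system is itself a product under Annex I legislation
    needs_third_party_assessment: bool  # the product must undergo third-party conformity assessment

def is_annex_i_high_risk(system: AISystemProfile) -> bool:
    """Both conditions must hold cumulatively, as described above."""
    condition_1 = system.is_safety_component or system.is_product_itself
    condition_2 = system.needs_third_party_assessment
    return condition_1 and condition_2

# Example: an AI braking controller that is a safety component of a vehicle
# requiring third-party conformity assessment -> high-risk
print(is_annex_i_high_risk(AISystemProfile(True, False, True)))  # True
```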
The relevant harmonization legislation is listed in Annex I of the AI Act and includes frameworks covering:
- Medical devices (including in vitro diagnostic devices)
- Machinery and industrial equipment
- Toys
- Motor vehicles
- Civil aviation
- Railway systems
- Radio equipment
- Lifts and appliances burning gaseous fuels.
Examples in practice:
i. An AI system controlling braking or steering functions in motor vehicles, where the AI system itself or the vehicle product requires a third-party conformity assessment.
ii. An AI-enabled interactive toy that uses facial recognition or voice analysis as part of its core functionality and requires third-party assessment under the Toy Safety Directive.
2. AI systems used in specific high-risk use cases listed in Annex III of the AI Act
The EU AI Act sets out a detailed list of standalone AI systems considered high-risk due to their intended purpose and potential impact. Key categories include:
a) Biometric identification and categorization of natural persons
This subcategory covers AI systems intended for biometric identification and categorization. It includes remote biometric identification systems, such as facial recognition tools used in public spaces for security or law enforcement purposes, as well as biometric categorization systems that classify individuals based on attributes like age, gender, or ethnicity. It also encompasses emotion recognition systems that infer emotional states from biometric data, for example, in workplaces or educational settings.
b) Management and operation of critical infrastructure
AI systems used to manage or operate critical infrastructure fall within this high-risk category. These include systems applied in the energy or water supply sectors where failures could endanger public health and safety. For example, AI used for load balancing within electricity grids or for optimizing water distribution networks would fall under this subcategory due to the potential severity of operational failures.
c) Education and vocational training
This subcategory includes AI systems that determine educational or vocational outcomes. It covers tools used for student admissions, such as automated entrance exam scoring systems, and AI systems that assess learning progress in high-stakes exams or professional certifications. Given their impact on access to education and careers, these systems are categorized as high-risk.
d) Employment, workers’ management, and access to self-employment
AI systems used in the employment context are also classified as high-risk. This includes AI tools for recruitment processes, such as automated CV screening or systems that analyze video interviews to assess candidate suitability. It also covers AI used for employee monitoring, performance evaluation, and decisions on promotions or dismissals, given their direct impact on individuals’ employment rights and opportunities.
e) Access to essential private and public services
This subcategory comprises AI systems used to assess eligibility or creditworthiness in contexts such as banking, insurance, or public welfare. For example, AI credit scoring tools used by banks to determine loan approvals, or AI systems that assess applications for public benefits or social services, fall under this category due to their influence on access to essential services.
f) Law enforcement
AI systems supporting law enforcement activities are classified as high-risk, including predictive policing tools that analyze data to identify crime hotspots or potential offenders, as well as AI systems used to assess the reliability of evidence or assist in profiling suspects. Their use directly affects fundamental rights and liberties, justifying strict regulatory oversight.
g) Migration, asylum, and border control management
This subcategory includes AI systems used for migration, asylum, and border control management. For instance, AI tools that verify the authenticity of travel documents, assess security or irregular migration risks, or perform behavioral analysis and lie detection during border checks are all classified as high-risk due to their implications for individuals’ rights to freedom of movement and protection.
h) Administration of justice and democratic processes
Finally, AI systems used in the administration of justice and democratic processes are categorized as high-risk. This includes AI tools that assist judges in interpreting facts, assessing recidivism risks, or applying the law in court decisions, due to their potential influence on judicial outcomes and fundamental rights.
Disclaimer: This Annex III list is not static. The European Commission will periodically update it based on technological developments and emerging risks, meaning organizations must monitor future updates to remain compliant.
For organizations using or developing General Purpose AI (GPAI) models, it is important to note that GPAI integrated into high-risk systems must comply with both sets of obligations. Learn more about General Purpose AI (GPAI) and how it’s regulated under the EU AI Act.
High-Risk AI Systems: When Is an AI System Not Considered One?
Although the EU AI Act establishes a broad list of AI systems categorized as high-risk, there are important exceptions. Certain systems listed in Annex III may be exempted from high-risk classification if they do not pose a significant risk to the health, safety, or fundamental rights of natural persons.
An AI system referred to in Annex III will not be considered high-risk if it does not materially influence decision-making outcomes or pose significant risks, and if it meets at least one of the following conditions:
i. The AI system performs a narrow procedural task. This refers to AI used for strictly limited, well-defined functions that do not involve discretionary decisions affecting individuals’ rights or safety.
ii. The AI system is designed solely to improve the result of a previously completed human activity. For example, where the AI checks or verifies a decision already made by a human, without substituting or materially altering it.
iii. The AI system detects decision-making patterns or deviations without replacing or substantially influencing human review. This includes AI used to identify inconsistencies or patterns in prior decisions purely for informational purposes, where final assessments remain fully under human control with proper review processes in place.
iv. The AI system performs exclusively preparatory tasks supporting assessments relevant to Annex III use cases. For instance, data aggregation or formatting tools that prepare information for later human analysis in high-risk contexts, without themselves making or influencing decisions.
Regardless of the above exemptions, AI systems that perform profiling of natural persons (as defined under GDPR) in the contexts listed in Annex III are always classified as high-risk, given their inherent potential to affect fundamental rights.
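Taken together, the exemption conditions and the profiling override form a simple decision rule. The sketch below (Python, with hypothetical field names) is only an illustration of that logic, not a substitute for the documented assessment described next.

```python
from dataclasses import dataclass

@dataclass
class AnnexIIISystem:
    """Hypothetical flags for screening an Annex III system against the exemption conditions."""
    performs_profiling: bool             # profiling of natural persons (GDPR definition)
    narrow_procedural_task: bool         # condition i
    improves_prior_human_activity: bool  # condition ii
    detects_patterns_only: bool          # condition iii
    preparatory_task_only: bool          # condition iv

def remains_high_risk(system: AnnexIIISystem) -> bool:
    # Profiling of natural persons in Annex III contexts is always high-risk.
    if system.performs_profiling:
        return True
    # Otherwise, the system may be exempt if at least one of conditions i-iv applies
    # (and it does not materially influence decisions or pose significant risks).
    exempt = any([
        system.narrow_procedural_task,
        system.improves_prior_human_activity,
        system.detects_patterns_only,
        system.preparatory_task_only,
    ])
    return not exempt

# Example: a tool that only formats data for later human review, with no profiling
print(remains_high_risk(AnnexIIISystem(False, False, False, False, True)))  # False -> exempt
```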
Even if a provider determines that their AI system referred to in Annex III is not high-risk and therefore the typical high-risk obligations (explained in the next section) do not apply, the system remains subject to certain other obligations, and the provider must:
- Document an assessment explaining why the system does not pose a significant risk or materially influence outcomes, prior to placing it on the market or putting it into service.
- Register the system in accordance with Article 49(2) of the AI Act.
- Provide this documentation upon request to national competent authorities as part of their oversight and enforcement powers.
Obligations for High-Risk AI Systems Under the EU AI Act
High-risk AI systems trigger extensive compliance obligations for both providers (developers placing the system on the market or putting it into service under their name) and non-providers, including deployers, importers, and distributors.
The complexity of these obligations underscores the need for a clear and well-aligned AI policy within your organization. An internal AI policy framework helps coordinate efforts across teams, ensures ongoing compliance, and builds public trust.
Providers of High-Risk AI Systems
Providers carry the primary responsibility for designing, developing, and placing compliant AI systems on the EU market. Their obligations include:
a) Pre-market conformity assessment – Before placing a high-risk AI system on the market or putting it into service, providers must conduct a conformity assessment to demonstrate compliance with all mandatory requirements. Depending on the type of system, this assessment may be internal or require a notified body’s involvement.
b) Risk and quality management systems – Providers must establish and maintain a documented risk management system throughout the AI system’s lifecycle. This includes identifying, evaluating, and mitigating risks, integrated into an overarching quality management framework.
c) Data governance – Training, validation, and testing datasets must be relevant, representative, and free from bias as far as possible. Providers are required to implement robust data governance practices and quality controls to ensure accuracy and fairness.
d) Technical documentation and record-keeping – Comprehensive technical documentation is required, detailing system design, development processes, data sources, performance metrics, and risk assessments. This documentation must be sufficient for national authorities to verify compliance.
e) Retention of automatically generated logs – Providers are required to ensure that logs automatically generated by the AI system are retained for at least six months. Operationally, this means implementing systems for storing and retrieving logs efficiently by date or time, while also ensuring compliance with data protection obligations if logs contain personal data (a minimal illustrative sketch follows this list).
f) Transparency and user information – Providers must supply clear and understandable instructions for use, including labelling outputs when relevant (e.g. indicating that results are generated by AI), and provide all necessary information for deployers to understand the system’s capabilities and limitations.
g) Human oversight – High-risk AI systems must be designed with appropriate human oversight measures, such as stop buttons or override mechanisms, allowing users to intervene or deactivate the system when necessary.
h) Accuracy, robustness, and cybersecurity – Systems must be developed to achieve high levels of accuracy and resilience against manipulation or attacks. Providers must implement safeguards against unauthorized access, data corruption, and other cybersecurity threats.
i) Appointment of an authorized representative (for non-EU providers) – Providers not established in the EU must appoint an authorized representative within the Union under a formal mandate. This representative must be empowered to verify the EU declaration of conformity and technical documentation, maintain relevant documents for at least ten years after the AI system is placed on the market, and cooperate with competent authorities upon request. The authorized representative also has the right to terminate the mandate if it believes the provider is failing to meet its AI Act obligations.
j) Registration and post-market monitoring – Before deployment, each high-risk AI system must be registered in the EU’s public database. Providers must also implement post-market monitoring to track real-world performance, report serious incidents or breaches to authorities, and cooperate with market surveillance bodies.
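Returning to the log-retention obligation in point (e) above, the following Python sketch shows one hypothetical way a provider might store automatically generated logs per day, purge them only after the six-month minimum, and retrieve them by date. The file layout, retention window, and function names are assumptions for illustration, not requirements of the AI Act.

```python
import datetime
import json
from pathlib import Path

RETENTION_DAYS = 183  # at least six months, per the retention obligation described above
LOG_DIR = Path("ai_system_logs")  # hypothetical storage location

def write_log(event: dict) -> None:
    """Append an automatically generated event to a per-day log file."""
    LOG_DIR.mkdir(exist_ok=True)
    today = datetime.date.today().isoformat()
    with open(LOG_DIR / f"{today}.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def purge_expired_logs() -> None:
    """Delete log files older than the retention window (run e.g. daily)."""
    cutoff = datetime.date.today() - datetime.timedelta(days=RETENTION_DAYS)
    for path in LOG_DIR.glob("*.jsonl"):
        if datetime.date.fromisoformat(path.stem) < cutoff:
            path.unlink()

def read_logs_for_date(day: datetime.date) -> list[dict]:
    """Retrieve all events recorded on a given date, e.g. for audits or authority requests."""
    path = LOG_DIR / f"{day.isoformat()}.jsonl"
    if not path.exists():
        return []
    return [json.loads(line) for line in path.read_text(encoding="utf-8").splitlines()]
```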
Deployers of High-Risk AI Systems
While providers focus on design and development obligations, non-providers such as deployers, importers, and distributors have operational and compliance responsibilities to ensure safe and lawful use of high-risk AI systems.
a) Use as instructed – Deployers must use the AI system strictly in accordance with the provider’s instructions and organizational guidelines.
b) Assign competent human oversight – It is mandatory to appoint qualified personnel with the necessary training and authority to supervise the AI system and intervene if risks or issues arise.
c) Validate input data – Deployers must ensure that any data fed into the AI system is appropriate, accurate, and suitable for its intended purpose to maintain safe and effective operation.
d) Monitor operation and report issues – There is a duty to continuously monitor the system’s functioning. If hazards, malfunctions, or risks are detected, deployers must immediately cease use and notify both the provider (or distributor/importer) and relevant market surveillance authorities.
e) Retain system logs – Deployers must maintain automatically generated logs under their control for at least six months, or longer if required by EU or national law.
f) Inform employees and representatives – Before deploying high-risk AI systems in the workplace, employers must inform affected employees and their representatives in line with applicable labor law procedures.
g) Register when required – Deployers must verify that the high-risk AI system is registered in the EU database. If not, they must refrain from using it and alert the provider or distributor.
h) Support data protection impact assessments – Deployers must utilize transparency information provided by the AI provider to conduct any necessary Data Protection Impact Assessments under the GDPR.
i) Fundamental Rights Impact Assessment (FRIA) – Deployers of certain high-risk AI systems listed in Annex III must conduct a FRIA to assess and mitigate risks to fundamental rights before first use, but this obligation applies mainly to public bodies, public service providers (e.g. education, healthcare), and financial institutions using AI for credit or insurance decisions. It does not apply to AI systems that are products themselves or safety components covered by EU harmonization legislation, nor to systems used in managing critical infrastructure. The FRIA must cover intended use, affected groups, potential risks, human oversight, and mitigation measures, and deployers must notify the market surveillance authority of its results.
j) Comply with specialized law enforcement rules – Where deployers use AI for post-remote biometric identification in law enforcement, they must secure prior judicial or administrative authorization, log each use, and limit processing strictly to approved purposes.
k) Notify data subjects – When an AI system makes or assists decisions that affect individuals, deployers are required to inform those individuals that a high-risk AI system is in use.
l) Cooperate with authorities – Finally, deployers must provide information to or implement corrective measures required by market surveillance or data protection authorities to ensure continued compliance.
Obligations for Importers and Distributors of High-Risk AI Systems
In addition to providers and deployers, importers and distributors of high-risk AI systems also have specific compliance obligations under the AI Act.
a. Importers must:
- Verify that the provider has conducted the required conformity assessment, prepared technical documentation, and appointed an authorized representative if needed.
- Ensure the AI system bears the CE marking and is accompanied by the EU declaration of conformity and instructions for use.
- Indicate their name, trade name or trademark, and address on the AI system packaging or accompanying documentation.
- Ensure storage and transport conditions do not compromise the system’s compliance.
- Not place noncompliant or falsified AI systems on the market until they are brought into conformity.
- Keep copies of the EU declaration of conformity, notified body certificates, and instructions for use for 10 years after placing the AI system on the market, and ensure technical documentation is available to authorities upon request.
- Inform providers, authorized representatives, and market surveillance authorities of any risks identified with the AI system.
- Cooperate fully with competent authorities, providing all necessary information and documentation upon reasoned request.
b. Distributors must:
- Verify that providers and importers have fulfilled their obligations, including conformity assessments.
- Ensure AI systems have the CE marking and are accompanied by the EU declaration of conformity and instructions for use.
- Ensure storage and transport conditions do not jeopardize compliance.
- Not make available any noncompliant AI systems until they have been brought into conformity.
- If a noncompliant AI system has been placed on the market, take corrective action to bring it into conformity, withdraw, or recall it, or ensure that the provider, importer, or other relevant operator does so.
- Inform providers or importers of any risks identified and immediately notify them and competent authorities if they have made available an AI system that presents a risk, providing details of noncompliance and corrective actions taken.
- Cooperate with authorities by providing all necessary information and documentation regarding compliance actions taken, upon request.
Timeline for High-Risk AI Systems under the AI Act
While August 2027 marks the final compliance deadline for high-risk AI systems, several obligations apply well before that date, so organizations must begin preparations early.
- 1 August 2024 – The AI Act formally entered into force, initiating the countdown to phased compliance deadlines.
- 2 February 2025 – Prohibitions on unacceptable-risk AI systems came into effect. These are systems banned outright under the AI Act.
- 2 August 2025 – Obligations for general-purpose AI models come into effect, including foundational transparency and technical documentation requirements. Where general-purpose AI models are integrated into high-risk AI systems, they will need to comply with both the general-purpose model requirements and the high-risk AI system obligations.
- No later than 2 February 2026 – The European Commission will adopt implementing acts to create a standardized template for high-risk AI providers’ post-market monitoring plans. Additionally, the Commission will issue guidance, including examples to clarify which AI systems are considered high-risk and which are not, to support consistent interpretation and implementation across the EU.
- 2 August 2026 – Compliance obligations begin applying for high-risk AI systems specifically listed in Annex III of the AI Act. The Commission may also review and, where necessary, amend the Annex III list to reflect technological developments and emerging risks.
- 2 August 2027 – Full obligations for high-risk AI systems become enforceable. This includes obligations for Annex I high-risk AI systems – those that are themselves products, or are used as safety components of products, covered by existing EU harmonization legislation and requiring third-party conformity assessment (e.g. radio equipment, in vitro diagnostic medical devices, civil aviation security, railway systems, toys).
Additionally, on 6 June 2025, the European Commission launched a public consultation on high-risk AI systems to gather input on implementing the AI Act’s rules. The consultation seeks practical examples and clarifications regarding high-risk AI classifications, related obligations, and responsibilities along the AI value chain. It is open to all stakeholders – including providers, deployers, businesses, public authorities, academia, and civil society – until 18 July 2025 and will inform upcoming Commission guidelines on high-risk AI systems.
Risks and Fines for Non-Compliance
Under the AI Act, non-compliance with obligations for high-risk AI systems carries significant penalties. Entities failing to meet requirements such as data quality, technical documentation, transparency, human oversight, and robustness may face fines of up to €15 million or 3% of their total global annual turnover from the previous fiscal year, whichever is greater. These penalties reflect the EU’s emphasis on ensuring AI systems are safe, trustworthy, and effective.
Additionally, providing false, incomplete, or misleading information to notified bodies or competent authorities can result in fines of up to €7.5 million or 1.5% of total global annual turnover, underscoring the critical importance of transparency and accuracy in all compliance communications.
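To see how the “whichever is greater” mechanism plays out in practice, here is a small illustrative calculation with hypothetical turnover figures:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Maximum possible fine: the fixed amount or the share of global annual turnover, whichever is greater."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# High-risk obligations tier described above: EUR 15 million or 3% of turnover
print(max_fine(2_000_000_000, 15_000_000, 0.03))  # 60,000,000 -> the 3% figure applies
# Misleading-information tier: EUR 7.5 million or 1.5% of turnover
print(max_fine(100_000_000, 7_500_000, 0.015))    # 7,500,000 -> the fixed cap applies
```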
High-risk AI systems are central to the EU AI Act’s regulatory framework, carrying extensive obligations for providers, deployers, importers, and distributors. While full compliance deadlines extend to August 2027, preparation must begin now – not only from a legal perspective, but also through the development of comprehensive AI governance frameworks.
Organizations must identify whether their AI systems fall into the high-risk category, understand applicable obligations and exemptions, and implement robust compliance processes that align with evolving AI policy standards at both the EU and national levels.
Equally important is promoting AI literacy across teams – especially among decision-makers and operational staff – so that the legal, ethical, and technical implications of high-risk AI deployment are fully understood and responsibly managed. This is critical not only to ensure compliance, but also to foster transparency and public trust in AI-driven products and services.
If you’re unsure how to begin aligning with the EU AI Act or building internal awareness, Whisperly can be a good starting point for practical guidance on AI governance, compliance, and responsible innovation.