EU AI Act Prohibited Practices: What’s Banned and Why It Matters

June 5, 2025

The first provisions of the EU AI Act are already in effect, and it is no accident that the rules around forbidden AI practices took priority. You can read more about the broader framework in our EU AI Act Summary for full context on how these bans fit into the regulation as a whole.

Article 5 of the EU AI Act, which defines which AI uses are strictly prohibited, became applicable on February 2, 2025, well ahead of most other provisions of the Act. By bringing these EU AI Act prohibited practices into effect first, policymakers moved to curb the most dangerous and unethical applications of AI from the very beginning.

As that date is already behind us, organizations and developers should be fully aware of which AI behaviors are no longer allowed, whether due to how they function or what they are used for. For those managing compliance programs, strong AI Governance is essential to identify and prevent these high-risk uses. In this article, we break down the main categories of practices that the EU AI Act explicitly outlaws.
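
To make this concrete for compliance teams, here is a minimal sketch of how an internal AI-system inventory could be screened against the Article 5 categories discussed below. The category labels, the `AISystem` structure, and the `screen_system` helper are hypothetical illustrations, not official terminology or tooling from the Act.

```python
from dataclasses import dataclass, field

# Illustrative shorthand for the Article 5 categories covered in this article
# (simplified labels, not the Act's official wording).
PROHIBITED_FLAGS = {
    "exploits_vulnerabilities",      # age, disability, social/economic situation
    "subliminal_manipulation",       # deceptive techniques beyond user awareness
    "social_scoring",                # scores causing unrelated or unjust treatment
    "crime_prediction_profiling",    # profiling-only criminality predictions
    "emotion_inference_work_edu",    # emotion recognition at work or in education
    "untargeted_face_scraping",      # scraping images to build face databases
    "discriminatory_biometric_cat",  # inferring race, beliefs, orientation, etc.
    "realtime_remote_biometric_id",  # live public-space identification (narrow exceptions)
}

@dataclass
class AISystem:
    name: str
    flags: set[str] = field(default_factory=set)  # risk flags assigned at intake review

def screen_system(system: AISystem) -> list[str]:
    """Return any Article 5 categories this system appears to touch."""
    return sorted(system.flags & PROHIBITED_FLAGS)

# Example: a marketing tool flagged during intake review.
tool = AISystem("ad-targeter", {"exploits_vulnerabilities", "uses_cookies"})
for hit in screen_system(tool):
    print(f"{tool.name}: escalate to legal review -> {hit}")
```

A real screening process rests on legal analysis rather than string labels, of course; the point is that a structured inventory makes the prohibited categories checkable instead of leaving them to institutional memory.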

What Are the EU AI Act Prohibited Practices?

This breakdown builds on the foundational overview provided in our EU AI Act Summary and highlights what is strictly forbidden under Article 5.

1. Altering an Individual’s Actions by Manipulating Their Vulnerabilities

Subjects affected by the AI Act must not develop, offer, or use AI systems that exploit a person’s specific vulnerability to influence their behavior in a way that could harm them or someone else. This falls under one of the core EU AI Act prohibited practices outlined in Article 5.

Such vulnerabilities include:

  • belonging to a particular age group,
  • having a disability,
  • or being in a distinct social or economic situation.

A real-world example of this prohibited practice under the EU AI Act is any marketing strategy that intentionally singles out vulnerable individuals to direct them toward specific actions. For instance, consider an AI-driven advertising tool that monitors online activity and detects users whose web searches often include “affordable mobility aids” and “home care tips”, indicating that such users might be older adults with mobility challenges. If the AI starts targeting these users with ads for overpriced, unverified health supplements or home devices that offer little real benefit, the practice violates the EU AI Act, as it exploits an individual’s physical or psychological vulnerabilities.

2. Deceptive or Manipulative Techniques Beyond the User’s Awareness

The AI Act includes a prohibition aimed at preventing negative impacts on an individual through deceptive, manipulative, or other subliminal techniques implemented in an AI system.

The prohibition covers techniques that steer a person toward a decision they would otherwise not make, causing actual or potential harm. Such systems must not be used, placed on the market, or put into service.

The reason these methods are prohibited is that they undermine an individual’s ability to make a fully informed choice. In other words, these techniques operate beneath a person’s conscious awareness and provoke decisions or behaviors that do not reflect their free will.

For example, imagine a travel booking website using an AI system to monitor a user’s browsing behavior. Based on the information collected, the system can detect when someone is urgently searching for last-minute flights. At that critical moment, the system might show a pop-up window, creating a false sense of urgency by displaying a message: “Hurry – only 2 seats left at this price!”.

Even if it is not true that only two seats remain, this tactic pressures the traveler into booking a ticket they may not need or be able to afford.

3. Assigning Social Scores Leading to Negative Outcomes

Further, using AI to classify individuals based on their actions or traits in a way that assigns them a negative “social score” is not permitted. Negative social scoring can, in practice, lead to one or both of the following outcomes:

  • Treating an individual or a group unfavorably in social contexts unrelated to the original reason their data was collected
  • Subjecting an individual or a group to unjust or disproportionately negative treatment.

Why does the AI Act explicitly prohibit these activities? Because AI-driven scoring can easily undermine fundamental rights and principles such as fairness, equality, and dignity.

Imagine an AI system used by a rental platform to analyze users’ online reviews and social media activity. If the AI assigns a “low trust” score to a prospective tenant based on past comments or minor complaints, and the platform consequently refuses their rental application or demands a significantly higher deposit, this would be discriminatory: the person is penalized in a context unrelated to the purpose for which their data was originally collected.

4. Predicting Criminal Behavior Through Profiling

According to the EU AI Act, AI systems must not be used to predict the likelihood of an individual committing a crime based solely on their social interactions or personality traits. Specifically, it is prohibited to deploy machine learning models that analyze someone’s character based on their communication style, psychological profile, or other personal characteristics, and then flag them as a potential future offender without any direct connection to factual criminal evidence.

Such a prohibited system could, for example, use the results of an individual’s personality test or scan someone’s social media profile and, on that basis alone, conclude that the person may become a criminal. As those conclusions would rest on subjective criteria rather than proven objective facts, they could easily lead to errors, bias, and severe privacy violations.

However, this ban does not apply to AI tools that only assist human investigators and use objective, provable information. When an AI system supports a law enforcement officer by highlighting documented, evidence-based leads, rather than speculating on the criminality risk associated with an individual’s personality, it is allowed under the EU AI Act.

In short, the Act directs and limits the use of AI in law enforcement rather than prohibiting it entirely.

5. Inferring Employees’ or Students’ Emotions

Imagine an AI application that records team members during online meetings to evaluate their emotional states – such a tool falls under the next category of prohibited practices. Any AI system aimed at inferring the emotions of employees or students is forbidden under the AI Act. This category specifically includes technologies that analyze emotions (through voice patterns, facial expressions, or otherwise) and draw conclusions from that data.

The reason this practice is banned is that emotion-detection tools in the workplace or in education can introduce bias, lead to unfair treatment, and violate fundamental rights. In both settings, individuals are usually in a subordinate position relative to employers or educational institutions, which makes these systems especially prone to fostering discrimination.

That said, not all emotion-detection AI is prohibited – the AI Act permits systems put in place for medical or safety reasons, even within workplace or educational settings. In those contexts, such tools can be genuinely beneficial and are therefore allowed.
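
As a rough illustration of how this carve-out works, a product team might gate an emotion-recognition feature on its declared deployment context and purpose. The labels and the `emotion_inference_allowed` helper below are hypothetical, not terms defined in the Act.

```python
# Hypothetical labels for an emotion-recognition feature's deployment.
PERMITTED_PURPOSES = {"medical", "safety"}          # the Act's carve-out
RESTRICTED_CONTEXTS = {"workplace", "education"}    # settings where the ban applies

def emotion_inference_allowed(context: str, purpose: str) -> bool:
    """The ban applies at work and school unless the purpose is medical or safety."""
    if context in RESTRICTED_CONTEXTS:
        return purpose in PERMITTED_PURPOSES
    return True  # outside those settings, other (non-prohibition) rules still apply

print(emotion_inference_allowed("workplace", "performance_review"))  # False
print(emotion_inference_allowed("workplace", "safety"))              # True
```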

6. Untargeted Data Scraping for Facial Recognition

The EU AI Act also prohibits using AI to harvest large volumes of images or recordings of individuals in order to build or expand facial recognition databases. In other words, training a facial recognition system on data gathered through untargeted scraping – data that was initially collected for other purposes – is strictly forbidden.

Data scraping has been a legally controversial topic for some time now, and those debates are only deepened by the AI Act. The ban applies to all sources of visual data, regardless of whether the data is widely available online or captured via CCTV cameras. It applies even to recordings made for legitimate security or monitoring purposes, such as private property surveillance – the collected data cannot be repurposed to feed a facial recognition database without the data subject’s consent.

The main justification behind this restriction is to restrain unauthorized processing of personal data and support individuals’ privacy rights. By aligning with GDPR principles, the AI Act ensures that any collection of biometric data remains lawful, proportionate, and transparent, regardless of the circumstances.

7. Biometric Categorization Systems Used Discriminatively

Another item on the list of banned AI practices is the use of biometric data to classify individuals on unlawful or discriminatory grounds.

Specifically, AI systems may not sort or label people based on characteristics such as race, political opinions, trade union membership, religious beliefs, philosophical beliefs, sex life, or sexual orientation. For example, if a retail store installs cameras equipped with AI to analyze customers’ facial features and estimate their race, and the system then uses such inferred racial information to show certain racial groups higher prices or different product recommendations, such a system would violate the AI Act’s prohibition.

On the other hand, legitimate biometric applications, such as those used by law enforcement to identify suspects or carry out judicially authorized actions, remain permitted under the EU AI Act, since they involve biometric data being used for legally supervised purposes (for instance, locating an offender).

8. Real-Time Remote Biometric Identification Systems for Law Enforcement

Not all law enforcement uses of AI are permitted under the Act. Specifically, the real-time use of remote biometric identification systems in publicly accessible spaces for law enforcement purposes is, as a rule, banned because it poses serious risks to individual rights and privacy. This form of biometric surveillance is one of the most debated EU AI Act prohibited practices.

For example, deploying facial recognition at large gatherings, such as concerts or mass protests, to identify attendees and then cross-check them against criminal databases is not allowed.

A narrow list of circumstances may justify such systems:

  1. Searching for victims of abduction, human trafficking, or sexual exploitation, or for missing persons
  2. Preventing a specific, substantial, and imminent threat to life or public safety, such as a credible terrorist attack
  3. Locating or identifying a person suspected of a serious crime punishable by at least four years’ imprisonment

Even when one of these exceptions applies, strict additional conditions must still be met before the AI-based system can be used to confirm a person’s identity (a simple checklist sketch follows this list):

  • Authorities must assess the severity, likelihood, and scope of harm that could arise if the system is not used.
  • A fundamental-rights impact assessment (FRIA) must be conducted to understand the potential consequences of deploying the system.
  • The system must be registered in the EU’s database of high-risk AI systems.
  • The system must be approved at the national level – the responsible authority in the EU member state must be satisfied, on the basis of objective evidence, that using the system is both necessary and proportionate to a legitimately justified goal.
  • Both the national market-surveillance authority and data-protection authority must be notified, and such authorities have the obligation to send annual reports to the European Commission detailing the system’s use.
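
To illustrate how heavy these safeguards are, here is a minimal sketch of that pre-deployment checklist encoded in code. The field names and the `may_deploy` helper are hypothetical illustrations of the conditions above, not an official compliance test.

```python
from dataclasses import dataclass

@dataclass
class RBIDeploymentRequest:
    """Hypothetical record of the safeguards for a real-time identification deployment."""
    harm_assessment_done: bool       # severity, likelihood, and scope of harm assessed
    fria_completed: bool             # fundamental-rights impact assessment (FRIA) done
    registered_in_eu_database: bool  # entered in the EU database of high-risk AI systems
    national_authorization: bool     # necessity and proportionality approved nationally
    authorities_notified: bool       # market-surveillance and data-protection bodies told

def may_deploy(request: RBIDeploymentRequest, exception_applies: bool) -> bool:
    """Every safeguard must hold AND one of the narrow exceptions must apply."""
    safeguards = (
        request.harm_assessment_done
        and request.fria_completed
        and request.registered_in_eu_database
        and request.national_authorization
        and request.authorities_notified
    )
    return exception_applies and safeguards

# Example: everything in place except national authorization – still prohibited.
request = RBIDeploymentRequest(True, True, True, False, True)
print(may_deploy(request, exception_applies=True))  # False
```

The point of the sketch is the conjunction: a single missing safeguard, or the absence of a listed exception, keeps the deployment squarely in prohibited territory.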

If an EU Member State chooses to permit real-time biometric identification in public spaces more broadly, it may do so through national legislation, provided that the national law fully complies with the AI Act’s restrictions.

Now that the EU AI Act prohibited practices have taken effect, organizations must remain alert in identifying and eliminating banned AI uses. By clearly defining and enforcing these prohibitions, the EU AI Act seeks to safeguard fundamental rights and promote responsible innovation.

Over time, we will learn whether these rules strike the right balance between protection and progress. In the meantime, businesses should keep auditing their AI systems and stay informed on the evolving guidance in order to ensure compliance and foster a trustworthy AI ecosystem. To do so effectively, a structured AI Governance approach is key to ongoing risk management and regulatory alignment.

How Can Whisperly Help?

At Whisperly, we help organizations identify and eliminate prohibited AI practices, ensuring full compliance with the EU AI Act while fostering ethical and responsible innovation.
