February 16, 2026

The 8 AI Practices the EU Just Banned — And Why Your Company Might Be Breaking the Law Right Now

I want you to imagine something.

You run a mid-size insurance company in Munich. Your team has spent eighteen months building an AI system that analyses customer behaviour patterns to flag potential fraud. It works. It saves money. Everyone is pleased.

Then a compliance consultant walks in, reads the system specification, and tells you that one of the features — the part that analyses facial micro-expressions during video calls to detect deception — is illegal. Has been since February 2025. The penalty? Up to €35 million or 7% of your global annual turnover.

You had no idea. Your engineering team had no idea. The feature was a nice-to-have that someone added in sprint 14 and nobody flagged. And now it’s a prohibited AI practice under Article 5 of the EU AI Act.

This is not a hypothetical. Variants of this conversation are happening across Europe right now. The eight prohibited practices in the AI Act are the absolute red lines — the things you simply cannot do with AI in the European Union, regardless of your intentions, your industry, or how clever the engineering is.

Let me walk you through each one. Some will be obvious. Others might surprise you.

The Eight Prohibited AI Practices

Before we dive in: these prohibitions have applied since 2 February 2025. Not August 2026 with the high-risk rules. Not “sometime next year.”

Now.

The penalty provisions have been enforceable since 2 August 2025. If you’re running any of these systems today, you’re already non-compliant.

| # | Prohibited Practice | The Short Version |
| --- | --- | --- |
| 1 | Subliminal manipulation | AI that distorts behaviour through techniques people can’t detect |
| 2 | Exploitation of vulnerabilities | AI that targets people because of their age, disability, or social/economic situation |
| 3 | Social scoring | AI that evaluates people based on social behaviour for detrimental treatment |
| 4 | Individual criminal risk prediction | AI that predicts someone will commit a crime based solely on profiling |
| 5 | Untargeted facial recognition scraping | Building facial recognition databases by scraping images from the internet or CCTV |
| 6 | Emotion recognition in workplace/education | AI that infers employees’ or students’ emotions |
| 7 | Biometric categorisation on sensitive characteristics | Using biometrics to infer race, political opinions, religion, or sexual orientation |
| 8 | Real-time remote biometric identification for law enforcement | Live facial recognition in public spaces by police (with narrow exceptions) |

Now let me explain what each one actually means in practice.

1. Subliminal Manipulation

What it says: You cannot deploy AI that uses subliminal techniques — beyond a person’s consciousness — to materially distort their behaviour in a way that causes or is likely to cause them significant harm.

What it means for you: This isn’t about persuasion. Advertising is fine. Recommendation engines are fine. The Commission’s guidelines specifically say personalised advertising is “not inherently manipulative.” What’s banned is AI that deliberately operates below the threshold of human awareness to alter decisions people wouldn’t otherwise make.

The grey zone that catches people: Think about dark patterns at scale, powered by AI. An e-commerce platform that uses AI to dynamically adjust interface elements — hiding cancellation buttons, creating false urgency, manipulating scroll patterns — in ways designed to be undetectable. If the AI is making these choices and the user can’t perceive the manipulation, you’re potentially in prohibited territory.

Industry example: A gaming company uses AI to personalise in-app purchase prompts based on real-time analysis of player frustration levels and spending susceptibility. The player never sees the AI at work — they just feel the urge to buy. If this causes financial harm to vulnerable users, it’s a problem.

2. Exploitation of Vulnerabilities

What it says: You cannot deploy AI that exploits vulnerabilities of specific groups due to their age, disability, or social or economic situation, in a way that materially distorts their behaviour and causes significant harm.

What it means for you: The key word is “exploits.” You can serve AI-powered products to elderly users, children, or economically disadvantaged people. What you cannot do is design AI that specifically targets their vulnerabilities to extract behaviour they wouldn’t otherwise exhibit.

Industry example: A lending platform uses AI to identify users in financial distress — late on rent, multiple declined transactions — and serves them high-interest loan products with AI-optimised copy designed to create urgency. The AI is specifically targeting economic vulnerability to drive decisions the person wouldn’t rationally make. This is prohibited.

Another example: An AI-powered toy that uses conversational techniques to encourage children to share personal information about their family’s habits or purchases. The child’s developmental vulnerability is being exploited by the system’s design.

3. Social Scoring

What it says: You cannot deploy AI that evaluates or classifies people based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental treatment that is unjustified or disproportionate.

What it means for you: This is the “China scenario” that got a lot of press, but its scope is broader than most people realise. It applies to both public and private actors. It’s not limited to government systems.

The trap: Employee monitoring systems that aggregate behavioural data — attendance patterns, email response times, collaboration metrics, peer feedback — into a single “performance score” that drives employment decisions could fall within this prohibition if the scoring leads to treatment that is unjustified relative to the behaviour being assessed.

Industry example: A property management company uses AI to score prospective tenants based on their social media activity, neighbourhood crime data, and online behaviour patterns. A low score results in automatic rejection. The social behaviour assessment leading to detrimental treatment (denied housing) is exactly what this prohibition targets.

4. Individual Criminal Risk Prediction Based on Profiling

What it says: You cannot deploy AI that assesses the risk of a person committing a criminal offence based solely on profiling or the assessment of their personality traits and characteristics. This doesn’t apply to AI that supports human assessment based on objective, verifiable facts directly linked to criminal activity.

What it means for you: The word “solely” is doing heavy lifting here. If the AI prediction is based entirely on who someone is (demographics, personality, neighbourhood) rather than what they’ve done (evidence of specific criminal activity), it’s prohibited.

Industry example: A retail chain deploys AI surveillance that flags shoppers as “high risk” based on demographic profiling, clothing analysis, and movement patterns — not based on actual suspicious behaviour like concealment of goods. The system is predicting criminal propensity from personal characteristics, not detecting criminal activity.

What’s allowed: AI that analyses CCTV footage to detect actual suspicious behaviour (someone concealing merchandise, attempting to bypass security) is a different system. It’s reacting to observable actions, not predicting crime from personal traits.
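To make the “solely” test concrete, here is a minimal sketch of how a compliance review might classify a risk system’s inputs. The signal lists and feature names are invented for illustration; they are assumptions, not a legal taxonomy:

```python
# Illustrative sketch only: the two signal lists below are assumptions
# for this example, not a legal taxonomy.

# Signals describing WHO a person is (profiling, personal traits)
TRAIT_SIGNALS = {"age_band", "postcode", "clothing_style", "personality_score"}

# Signals describing WHAT a person was observed doing
BEHAVIOUR_SIGNALS = {"concealed_merchandise", "bypassed_security_gate", "left_without_paying"}

def screen_risk_system(input_features: set[str]) -> str:
    """Rough Article 5 screen for criminal-risk prediction: flag systems
    whose prediction rests solely on traits, not observed behaviour."""
    traits = input_features & TRAIT_SIGNALS
    behaviour = input_features & BEHAVIOUR_SIGNALS
    if traits and not behaviour:
        return "RED FLAG: risk predicted solely from profiling (prohibited territory)"
    if traits:
        return "REVIEW: mixed inputs; verify behaviour, not traits, drives the score"
    return "OK: reacting to observed behaviour, outside this prohibition"

print(screen_risk_system({"age_band", "postcode", "clothing_style"}))  # RED FLAG
print(screen_risk_system({"concealed_merchandise"}))                   # OK
```

The logic mirrors the legal test: if every input describes who someone is, and none describes what they were observed doing, the system is predicting propensity from profiling.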

5. Untargeted Facial Recognition Scraping

What it says: You cannot create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

What it means for you: You cannot build a facial recognition system by harvesting photos from social media, public websites, or surveillance cameras without specific authorisation. Clearview AI is the poster child for this prohibition — and it’s now explicitly illegal in the EU.

Industry example: A security company scrapes publicly available photos from LinkedIn, Facebook, and Instagram to build a facial recognition database it sells to retailers for loss prevention. Every image collected this way violates Article 5, regardless of how the database is subsequently used.

6. Emotion Recognition in Workplace and Education

What it says: You cannot deploy AI systems that infer the emotions of people in workplace or educational settings. Exceptions exist for medical or safety reasons (e.g., detecting pilot fatigue).

What it means for you: This is the one that catches the most companies off guard. If your call centre software analyses agent tone to assess mood. If your LMS tracks student facial expressions to measure engagement. If your HR platform uses video interview analysis to infer candidate emotional states. All of these are prohibited in their workplace or education context.

Industry example — the insurance case I opened with: That fraud detection system with facial expression analysis during video calls? If the people on camera are employees or students being assessed, the emotion recognition component is prohibited outright. Even in the customer-facing context, it’s a high-risk biometric system that needs careful compliance assessment.

Another example: A corporate training platform uses webcam monitoring to track whether employees are “engaged” during mandatory training sessions, using AI to classify attention levels from facial expressions. This is emotion recognition in the workplace. It’s banned.

What’s allowed: An AI system in an aircraft cockpit that monitors pilot facial cues for signs of fatigue or medical distress — this falls under the safety exception.

7. Biometric Categorisation on Sensitive Characteristics

What it says: You cannot deploy AI that categorises individuals based on their biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Exceptions exist for labelling or filtering lawfully acquired biometric datasets, or in law enforcement for specific purposes.

What it means for you: You cannot use facial features, voice patterns, gait analysis, or other biometric data to sort people into categories based on sensitive personal characteristics.

Industry example: A retail analytics company uses in-store cameras with AI that categorises shoppers by inferred ethnicity and religion to “optimise product placement.” Even if no personally identifiable data is stored, the act of biometric categorisation based on sensitive characteristics is prohibited.

8. Real-Time Remote Biometric Identification for Law Enforcement

What it says: You cannot use “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes. There are narrow exceptions: searching for specific victims (abduction, trafficking, sexual exploitation), preventing a genuine and present terrorist threat, or locating suspects of specific serious crimes.

What it means for you: This primarily affects law enforcement, but it also affects technology providers. If you sell or provide real-time facial recognition technology to law enforcement agencies, you’re part of this value chain. Even where exceptions apply, a fundamental rights impact assessment and judicial or administrative authorisation are required.

Industry example: A city council deploys AI-powered cameras in a public square that continuously scan faces against a database. Unless this falls within the narrow exceptions and has been authorised through the proper legal channels, it’s prohibited.

The Grey Zones: Where Companies Actually Get Caught

The eight prohibitions are clear on paper. In practice, the boundaries are blurrier than most compliance teams are comfortable with. Here’s where I see the most risk:

| Grey Zone | Why It’s Dangerous |
| --- | --- |
| Dark patterns + AI personalisation | The line between “effective UX” and “subliminal manipulation” depends on whether the user can perceive the technique. AI makes manipulation invisible at scale. |
| Employee wellness platforms | Systems marketed as “wellness tools” that track engagement, sentiment, or stress through behavioural analysis may constitute emotion recognition in the workplace. |
| AI-powered recruitment | Video interview analysis tools that assess “soft skills” often infer emotional states from facial expressions, tone, and body language. |
| Dynamic pricing targeting vulnerable users | Surge pricing or personalised pricing that systematically exploits users identified as being in economic distress. |
| Tenant/customer scoring | Aggregating social and behavioural data into scores that drive access decisions (housing, services, credit) can constitute social scoring. |
| “Predictive” security systems | Retail or venue security AI that flags people based on appearance rather than behaviour patterns crosses into prohibited territory. |

Best Practices: Staying on the Right Side

| Practice | What To Do |
| --- | --- |
| Audit every AI system against Article 5 | Don’t assume you’re clean. Walk through each prohibition against every AI system in your organisation, including third-party tools and embedded features. |
| Document the assessment | Even if you conclude a system is not prohibited, document why. Regulators will want to see your reasoning. |
| Train your product teams | Engineers and product managers need to understand the prohibitions before they design features. Catching a violation in sprint 14 is too late. |
| Review third-party AI tools | Your vendor’s AI might contain prohibited features you didn’t ask for. If you deploy it, you’re liable. |
| Create a “red line” checklist for new features | Before any AI feature goes to development, screen it against Article 5. Make it a gate in your product process (see the sketch after this table). |
| Monitor the Commission’s review | The February 2026 review under Article 112 may expand the list of prohibited practices. Stay current. |
| When in doubt, don’t deploy | The penalties for prohibited practices are the highest in the entire AI Act. If a system is borderline, the risk-reward calculation is simple: don’t. |
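To make that “red line” gate tangible, here is a minimal sketch in Python. The Feature fields, the checklist questions, and the mapping to specific prohibitions are all illustrative assumptions; a real gate needs legal review behind every answer.

```python
# Minimal sketch of an Article 5 "red line" gate for proposed AI features.
# The Feature fields and the prohibition mapping are illustrative
# assumptions; a real gate needs legal review behind every answer.
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    uses_biometric_data: bool
    infers_emotions: bool
    context: str                    # e.g. "workplace", "education", "consumer"
    targets_vulnerable_group: bool
    produces_social_score: bool

def article5_gate(feature: Feature) -> list[str]:
    """Return the red lines a proposed feature trips; an empty list means proceed."""
    flags = []
    if feature.infers_emotions and feature.context in ("workplace", "education"):
        flags.append("Emotion recognition in workplace/education (practice 6)")
    if feature.targets_vulnerable_group:
        flags.append("Possible exploitation of vulnerabilities (practice 2)")
    if feature.produces_social_score:
        flags.append("Possible social scoring (practice 3)")
    if feature.uses_biometric_data and feature.infers_emotions and not flags:
        flags.append("Biometric emotion inference: high-risk even outside work/education")
    return flags

# The sprint-14 feature from the opening story would not have sailed through:
proposal = Feature(
    name="video-call deception detection",
    uses_biometric_data=True,
    infers_emotions=True,
    context="consumer",
    targets_vulnerable_group=False,
    produces_social_score=False,
)
for flag in article5_gate(proposal):
    print("STOP AND REVIEW:", flag)
```

Even a crude gate like this forces the conversation before sprint 14, not after deployment.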

How EYREACT Can Help

EYREACT’s Rule Engine includes specific rules and indicators mapped to every prohibited practice under Article 5. The platform automatically flags potential violations across your AI system portfolio, documents your assessment reasoning, and maintains an audit trail proving you’ve done the work.

Because when the regulator asks “did you check?” the answer needs to be documented, timestamped, and evidence-backed. Not “we assumed we were fine.” Book a demo!

FAQ

When did the prohibited practices become enforceable?

2 February 2025. This was the first major compliance deadline under the AI Act. Penalties have been enforceable since 2 August 2025. If you’re operating a prohibited AI system today, you’re already in violation.

What are the penalties?

Up to €35 million or 7% of global annual turnover, whichever is higher. These are the highest penalties in the entire AI Act — significantly more than high-risk non-compliance (€15M/3%) or supplying incorrect information (€7.5M/1%). Italy has also introduced criminal penalties for certain AI-related offences, including imprisonment for unlawful dissemination of deepfakes.
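As a quick worked example of the “whichever is higher” rule (the turnover figures below are invented for illustration):

```python
# "Whichever is higher": the fixed cap versus the turnover-based cap.
def max_article5_fine(global_annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

print(f"€{max_article5_fine(200_000_000):,.0f}")    # €35,000,000 (7% is €14M, fixed cap wins)
print(f"€{max_article5_fine(2_000_000_000):,.0f}")  # €140,000,000 (7% exceeds €35M)
```

The crossover sits at €500 million in global annual turnover; above that, the 7% figure is the binding cap.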

Have there been any enforcement actions yet?

As of February 2026, no public enforcement actions for prohibited practices have been announced. However, several investigations are reportedly underway, particularly around workplace emotion recognition and predictive policing. The enforcement landscape remains fragmented as member states continue designating competent authorities.

Does “emotion recognition” cover sentiment analysis of text?

The prohibition specifically targets emotion recognition in workplace and education settings using biometric data (facial expressions, voice tone, physiological signals). Pure text-based sentiment analysis of written content is generally not considered biometric emotion recognition — but if it’s used in a workplace context to infer individual employee emotions and drive employment decisions, the boundary becomes less clear. Seek specific legal advice.

Our AI personalises ads. Is that prohibited?

The Commission’s guidelines explicitly state that personalised advertising is “not inherently manipulative.” What matters is whether the AI uses subliminal techniques — methods the user cannot perceive — to distort behaviour in a way that causes significant harm. Transparent personalisation based on stated preferences is fine. AI that invisibly manipulates decision-making processes below the user’s awareness threshold is not.

We use AI for fraud detection. Is that affected?

Fraud detection itself is not prohibited. However, specific techniques within a fraud detection system might cross red lines. Emotion recognition (analysing customer facial expressions during interactions), predictive profiling (flagging customers as fraud risks based on demographics rather than behaviour), or social scoring elements (aggregating behavioural data into risk scores that lead to disproportionate treatment) could each trigger different prohibitions. Assess the system component by component.

Do the prohibitions apply to AI systems already deployed before February 2025?

Yes. Unlike some high-risk provisions that include transitional arrangements, the prohibitions apply to all AI systems regardless of when they were placed on the market. There is no grandfather clause for prohibited practices.

Can the list of prohibited practices be expanded?

Yes. Article 112 requires the Commission to review Article 5 periodically. The first mandatory review took place in February 2026 and may result in additional prohibited practices being added based on emerging evidence of harm.

We’re based outside the EU. Do the prohibitions apply to us?

Yes. The AI Act applies to any provider placing an AI system on the EU market or putting it into service in the EU, and to any deployer using an AI system within the EU — regardless of where the provider or deployer is established.

This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.