January 28, 2026 · 17 min read

EU AI Act Article 5: The Complete Guide to Prohibited AI Practices

I want you to imagine something.

You run a mid-size insurance company in Munich. Your team has spent eighteen months building an AI system that analyses customer behaviour patterns to flag potential fraud. It works. It saves money. Everyone is pleased.

Then a compliance consultant walks in, reads the system specification, and tells you that one of the features — the part that analyses facial micro-expressions during video calls to detect deception — is illegal. Has been since February 2025. The penalty? Up to €35 million or 7% of your global annual turnover.

You had no idea. Your engineering team had no idea. The feature was a nice-to-have that someone added in sprint 14 and nobody flagged. And now it’s a prohibited AI practice under Article 5 of the EU AI Act.

This is not a hypothetical. Variants of this conversation are happening across Europe right now. Article 5 contains the absolute red lines — the eight things you simply cannot do with AI in the European Union, regardless of your intentions, your industry, or how clever the engineering is.

Let me walk you through the article provision by provision. Some will be obvious. Others might surprise you.

EU AI Act Article 5 at a Glance: What It Is and Why It Exists

Article 5 of Regulation (EU) 2024/1689 — the EU AI Act — lists AI practices that are completely prohibited within the European Union. These are the practices the EU considers so fundamentally incompatible with European values, fundamental rights, and democratic principles that no level of regulation can make them safe.

Important: These are not “high-risk with extra safeguards.” These are banned practices.

| Key Fact | Detail |
| --- | --- |
| Legal reference | Article 5, Regulation (EU) 2024/1689 |
| What it covers | Eight categories of prohibited AI practices |
| Enforceable since | 2 February 2025 |
| Penalties enforceable since | 2 August 2025 |
| Maximum penalty | €35 million or 7% of global annual turnover (whichever is higher) |
| Applies to | Any provider placing AI on the EU market, any deployer using AI in the EU — regardless of location |
| Grandfather clause | None — applies to all AI systems regardless of when deployed |
| Commission guidelines | Published 4 February 2025, clarifying scope, conditions, and examples |
| Periodic review | Article 112 requires regular review — first review February 2026, may expand the list |
| Enforcement | National market surveillance authorities + national data protection authorities (for biometric provisions) |

Key Definitions You Need for EU AI Act Article 5

Before diving into the eight prohibitions, these definitions from Article 3 determine whether Article 5 applies to you:

| Term | Definition | Why It Matters for Article 5 |
| --- | --- | --- |
| AI system | A machine-based system that infers from input how to generate outputs (predictions, content, recommendations, decisions) that can influence environments | If your software infers and influences, it’s an AI system — and Article 5 applies |
| Placing on the market | Making an AI system available for the first time on the EU market, whether for payment or free | Free tools are covered. Internal tools you make available to EU users are covered. |
| Putting into service | First use of an AI system for its intended purpose, directly to a deployer or for own use in the EU | Internal deployment counts. You don’t need to sell the system for Article 5 to apply. |
| Provider | Entity that develops an AI system (or has one developed) and places it on the market under its own name | If you build it and brand it, you’re the provider — and Article 5 applies to you |
| Deployer | Entity that uses an AI system under its authority in a professional capacity | If you use someone else’s AI in your business, you’re the deployer — and Article 5 still applies to you |
| Subliminal technique | A technique deployed without a person’s awareness that materially distorts their behaviour | The person can’t detect it. That’s the key test. |
| Biometric data | Personal data resulting from specific technical processing of physical, physiological, or behavioural characteristics (facial images, fingerprints, voice patterns, gait) | Broadly defined. If it comes from someone’s body or behaviour and is processed technically, it’s biometric. |
| Emotion recognition | An AI system that identifies or infers emotions or intentions from biometric data | Inferring anger, stress, engagement, or truthfulness from facial expressions, voice, or body language |
| Remote biometric identification | Identifying a natural person at a distance by comparing biometric data against a reference database, without the person’s prior knowledge | One-to-many matching: “who is this person?” not “is this person who they claim to be?” |
| Real-time | Identification occurring without significant delay — capture, comparison, and identification in near-instantaneous or continuous mode | Live surveillance, not post-event review |

Not sure where your AI product falls? Get your personalised Compliance Readiness Roadmap.

8 Prohibited Practices: Article 5(1)(a) through 5(1)(h)

| # | Article | Prohibited Practice | The Short Version |
| --- | --- | --- | --- |
| 1 | 5(1)(a) | Subliminal manipulation | AI that distorts behaviour through techniques people can’t detect |
| 2 | 5(1)(b) | Exploitation of vulnerabilities | AI that targets people because of their age, disability, or social/economic situation |
| 3 | 5(1)(c) | Social scoring | AI that evaluates people based on social behaviour for detrimental treatment |
| 4 | 5(1)(d) | Individual criminal risk prediction | AI that predicts someone will commit a crime based solely on profiling |
| 5 | 5(1)(e) | Untargeted facial recognition scraping | Building facial recognition databases by scraping images from the internet or CCTV |
| 6 | 5(1)(f) | Emotion recognition in workplace/education | AI that infers employees’ or students’ emotions |
| 7 | 5(1)(g) | Biometric categorisation on sensitive characteristics | Using biometrics to infer race, political opinions, religion, or sexual orientation |
| 8 | 5(1)(h) | Real-time remote biometric identification for law enforcement | Live facial recognition in public spaces by police (with narrow exceptions) |

Provision-by-Provision Breakdown of EU AI Act Article 5

Article 5(1)(a) — Subliminal Manipulation

The legal test (cumulative conditions):

  1. An AI system is deployed
  2. It uses subliminal techniques beyond a person’s consciousness, OR purposely manipulative or deceptive techniques
  3. The technique materially distorts the person’s behaviour
  4. The distortion causes or is reasonably likely to cause significant harm

What the Commission guidelines clarify: Personalised advertising is “not inherently manipulative” — it must deploy techniques the person cannot perceive. Lawful persuasion that operates transparently and facilitates informed consent is fine. The prohibition targets AI that bypasses conscious decision-making.

Industry example: A gaming company uses AI to personalise in-app purchase prompts based on real-time analysis of player frustration levels and spending susceptibility. The player never sees the AI at work — they just feel the urge to buy. If this causes financial harm, it’s prohibited.
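
To make the cumulative structure concrete, here is a minimal sketch of how an internal screening tool might encode the 5(1)(a) test. The class and field names are illustrative assumptions, not official tooling or legal definitions; the point is simply that all four conditions must hold before the prohibition bites.

```python
# Hypothetical screening helper for Article 5(1)(a); names and structure are
# illustrative only and are not derived from the Act or any official tool.
from dataclasses import dataclass

@dataclass
class SubliminalManipulationCheck:
    is_ai_system: bool                   # Article 3(1): the system infers and influences
    technique_is_imperceptible: bool     # subliminal, purposely manipulative, or deceptive
    materially_distorts_behaviour: bool  # the person acts differently than they otherwise would
    significant_harm: bool               # harm caused or reasonably likely to be caused

    def prohibited(self) -> bool:
        # The conditions are cumulative: failing any one takes the system
        # outside 5(1)(a), although other prohibitions may still apply.
        return all([
            self.is_ai_system,
            self.technique_is_imperceptible,
            self.materially_distorts_behaviour,
            self.significant_harm,
        ])

# The gaming scenario above: the player cannot perceive the technique,
# behaviour is distorted, and financial harm is reasonably likely.
print(SubliminalManipulationCheck(True, True, True, True).prohibited())  # True
```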

The grey zone: AI-powered dark patterns at scale. An e-commerce platform that uses AI to dynamically adjust interface elements — hiding cancellation buttons, creating false urgency, manipulating scroll patterns — in ways designed to be undetectable. If the AI is making these choices and the user can’t perceive the manipulation, you’re in prohibited territory.

Article 5(1)(b) — Exploitation of Vulnerabilities

The legal test (cumulative conditions):

  1. An AI system is deployed
  2. It exploits vulnerabilities of a specific person or group
  3. The vulnerability is due to age, disability, or social or economic situation
  4. The exploitation materially distorts behaviour
  5. The distortion causes or is reasonably likely to cause significant harm

What it means: You can serve AI products to elderly users, children, or economically disadvantaged people. What you cannot do is design AI that specifically targets their vulnerabilities to extract behaviour they wouldn’t otherwise exhibit.

Industry example — lending: A platform uses AI to identify users in financial distress — late on rent, multiple declined transactions — and serves them high-interest loan products with AI-optimised urgency copy. The AI is targeting economic vulnerability to drive decisions the person wouldn’t rationally make. Prohibited.

Industry example — children: An AI-powered toy that uses conversational techniques to encourage children to share personal information about their family’s habits or purchases. The child’s developmental vulnerability is being exploited by the system’s design.

Article 5(1)(c) — Social Scoring

The legal test (cumulative conditions):

  1. AI systems evaluate or classify natural persons
  2. Based on their social behaviour OR known, inferred, or predicted personal characteristics
  3. The resulting social score leads to detrimental treatment
  4. The treatment is in contexts unrelated to the original data collection, OR is unjustified or disproportionate to the behaviour

Critical scope: This applies to both public and private actors. It’s not limited to government systems.

Industry example: A property management company uses AI to score prospective tenants based on their social media activity, neighbourhood crime data, and online behaviour patterns. A low score results in automatic rejection. Social behaviour assessment leading to detrimental treatment (denied housing) — prohibited.

The trap: Employee monitoring systems that aggregate behavioural data — attendance patterns, email response times, collaboration metrics, peer feedback — into a single “performance score” driving employment decisions could fall within this prohibition if the scoring leads to treatment unjustified relative to the behaviour being assessed.

Article 5(1)(d) — Individual Criminal Risk Prediction

The legal test (cumulative conditions):

  1. AI assesses the risk of a person committing a criminal offence
  2. Based solely on profiling or assessment of personality traits and characteristics
  3. Not based on objective, verifiable facts directly linked to criminal activity

The keyword: “Solely.” If the prediction is based entirely on who someone is (demographics, personality, neighbourhood) rather than what they’ve done (evidence of criminal activity), it’s prohibited.

What the guidelines clarify: This prohibition applies to law enforcement but can also extend to private actors acting on behalf of, or at the request of, law enforcement.

Industry example: A retail chain deploys AI surveillance that flags shoppers as “high risk” based on demographic profiling, clothing analysis, and movement patterns — not based on actual suspicious behaviour like concealment of goods. Predicting criminal propensity from personal characteristics is prohibited.

What’s allowed: AI that analyses CCTV footage to detect actual suspicious behaviour (someone concealing merchandise, attempting to bypass security) — this reacts to observable actions, not personal traits.

Article 5(1)(e) — Untargeted Facial Recognition Scraping

The prohibition: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

No conditions to evaluate — it’s absolute. You cannot build or expand a facial recognition database by indiscriminately harvesting facial images from social media, public websites, or CCTV footage. Clearview AI is the poster child — the practice is now explicitly illegal in the EU.

Industry example: A security company scrapes publicly available photos from LinkedIn, Facebook, and Instagram to build a facial recognition database for loss prevention. Every image collected this way violates Article 5, regardless of how the database is used.

Article 5(1)(f) — Emotion Recognition in Workplace and Education

The prohibition: Deploying AI systems that infer emotions of natural persons in workplace or educational settings.

Exceptions: Medical or safety purposes (e.g., detecting pilot fatigue, monitoring patient distress in healthcare).

This is the one that catches the most companies off guard. If your call centre software analyses agent tone to assess mood — prohibited. If your LMS tracks student facial expressions to measure engagement — prohibited. If your HR platform uses video interview analysis to infer candidate emotional states — prohibited.

Industry example: A corporate training platform uses webcam monitoring to classify employee attention levels from facial expressions during mandatory training sessions. Emotion recognition in the workplace — banned.

What’s allowed: An AI system in an aircraft cockpit that monitors pilot facial cues for signs of fatigue or medical distress — the safety exception applies.

Article 5(1)(g) — Biometric Categorisation on Sensitive Characteristics

The prohibition: Categorising individuals based on their biometric data to infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

Exceptions: Labelling or filtering of lawfully acquired biometric datasets, and categorisation of biometric data in the area of law enforcement.

Industry example: A retail analytics company uses in-store cameras with AI that categorises shoppers by inferred ethnicity and religion to “optimise product placement.” Even if no personally identifiable data is stored, the act of biometric categorisation based on sensitive characteristics is prohibited.

Article 5(1)(h) — Real-Time Remote Biometric Identification for Law Enforcement

The prohibition: Using real-time remote biometric identification systems in publicly accessible spaces for law enforcement.

Narrow exceptions (all require authorisation):

  • Searching for specific victims (abduction, trafficking, sexual exploitation)
  • Preventing a genuine and present terrorist threat
  • Locating suspects of specific serious crimes listed in the Act

Even where exceptions apply: A fundamental rights impact assessment (FRIA, Article 27) and judicial or administrative authorisation are required before deployment. In urgent cases, authorisation can follow within 24 hours, but if it is refused, use must stop immediately and all data and outputs must be deleted.

Industry example: A city council deploys AI-powered cameras in a public square that continuously scan faces against a database. Unless this falls within the narrow exceptions and has been properly authorised, it’s prohibited.

The Grey Zones: Where Companies Actually Get Caught

| Grey Zone | Why It’s Dangerous |
| --- | --- |
| Dark patterns + AI personalisation | The line between “effective UX” and “subliminal manipulation” depends on whether the user can perceive the technique. AI makes manipulation invisible at scale. |
| Employee wellness platforms | Systems marketed as “wellness tools” that track engagement, sentiment, or stress through behavioural analysis may constitute emotion recognition in the workplace. |
| AI-powered recruitment | Video interview analysis tools assessing “soft skills” often infer emotional states from facial expressions, tone, and body language. |
| Dynamic pricing targeting vulnerable users | Surge pricing or personalised pricing that systematically exploits users identified as being in economic distress. |
| Tenant/customer scoring | Aggregating social and behavioural data into scores that drive access decisions (housing, services, credit) can constitute social scoring. |
| “Predictive” security systems | Retail or venue security AI that flags people based on appearance rather than behaviour crosses into prohibited territory. |

Industry-Specific Impact of EU AI Act Article 5

Banking & Financial Services

Primary risks: Social scoring elements in customer risk profiling (Article 5(1)(c)), exploitation of financially vulnerable customers through AI-targeted high-interest products (Article 5(1)(b)), emotion recognition in customer service video calls (Article 5(1)(f) if applied to bank employees being assessed).

What to do: Audit every AI system that scores, classifies, or profiles customers. Verify that credit and risk scoring is based on financial data and behaviour, not social media activity or inferred personal characteristics. Ensure no AI component analyses employee emotions during calls. Document your Article 5 screening for each system.

Healthcare

Primary risks: Emotion recognition of healthcare workers in workplace settings (Article 5(1)(f)), biometric categorisation of patients by inferred sensitive characteristics (Article 5(1)(g)), exploitation of patient vulnerabilities through AI-driven treatment upselling (Article 5(1)(b)).

What to do: Healthcare AI that monitors patient emotional states for clinical purposes (pain assessment, mental health screening) likely falls under the medical exception to Article 5(1)(f). But AI monitoring healthcare workers’ emotions for performance assessment does not. Draw the line clearly between clinical and employment use.

Recruitment & HR

Primary risks: This sector faces the widest exposure to Article 5. Emotion recognition in video interviews (Article 5(1)(f)), social scoring through aggregated employee performance metrics (Article 5(1)(c)), subliminal manipulation in candidate engagement platforms (Article 5(1)(a)).

What to do: Strip any emotion inference features from recruitment AI immediately. If your video interview platform analyses facial expressions, vocal tone, or body language to assess candidates — that feature is prohibited. Review employee monitoring systems for social scoring characteristics. Ensure performance metrics are based on work outputs, not behavioural profiling.

Insurance

Primary risks: Exploitation of vulnerable customers through AI-targeted policy upselling during distress (Article 5(1)(b)), social scoring elements in risk assessment using non-insurance behavioural data (Article 5(1)(c)), emotion recognition during claims video calls (Article 5(1)(f)).

What to do: Verify that risk assessment models use actuarial data and insurance-relevant factors, not social behaviour or inferred personal characteristics. If claims processing includes video calls with AI analysis, ensure no emotion recognition component exists. Document the boundary between legitimate risk modelling and prohibited social scoring.

Education

Primary risks: Emotion recognition of students (Article 5(1)(f)) is the headline risk. AI proctoring with facial expression analysis, engagement monitoring through webcam, and attention tracking all potentially fall within this prohibition.

What to do: Disable any AI feature that infers student emotions from biometric data. Proctoring systems that verify identity (biometric verification) are different from systems that assess emotional states — draw the line. Engagement measurement through behavioural analytics (click patterns, time-on-task) is likely fine; engagement measurement through facial analysis is not.

Retail & E-commerce

Primary risks: Subliminal manipulation through AI-personalised dark patterns (Article 5(1)(a)), exploitation of economically vulnerable consumers (Article 5(1)(b)), biometric categorisation of shoppers (Article 5(1)(g)), predictive security profiling (Article 5(1)(d)).

What to do: Review every AI-driven UX element for subliminal characteristics. Dynamic pricing that targets users identified as being in financial distress is prohibited. In-store AI that categorises shoppers by inferred ethnicity or religion — prohibited regardless of whether personal data is stored. Security AI must flag behaviour, not demographics.

Law Enforcement & Security

Primary risks: Real-time remote biometric identification (Article 5(1)(h)), predictive policing from profiling (Article 5(1)(d)), emotion recognition (Article 5(1)(f) in workplace context for officers), untargeted facial scraping (Article 5(1)(e)).

What to do: If you provide AI to law enforcement, verify every system against all eight prohibitions. Real-time biometric ID in public spaces requires proper authorisation through the Article 5 exception framework — FRIA, judicial authorisation, temporal and geographic limits. Technology providers are part of the compliance chain.

Best Practices: Making Peace with Article 5

| Practice | What To Do |
| --- | --- |
| Audit every AI system against Article 5 | Don’t assume you’re clean. Walk through each prohibition against every AI system in your organisation, including third-party tools and embedded features. |
| Document the assessment | Even if you conclude a system is not prohibited, document why. Regulators will want to see your reasoning. |
| Train your product teams | Engineers and product managers need to understand the prohibitions before they design features. Catching a violation in sprint 14 is too late. |
| Review third-party AI tools | Your vendor’s AI might contain prohibited features you didn’t ask for. If you deploy it, you’re liable. |
| Create a “red line” checklist for new features | Before any AI feature goes to development, screen it against Article 5. Make it a gate in your product process (see the sketch after this table). |
| Read the Commission guidelines | Published 4 February 2025. They clarify what’s in scope and what’s not — including personalised advertising, lawful persuasion, and the boundaries of each prohibition. |
| Monitor the Commission’s review | The February 2026 review under Article 112 may expand the list of prohibited practices. Stay current. |
| When in doubt, don’t deploy | The penalties for prohibited practices are the highest in the entire AI Act. If a system is borderline, the risk-reward calculation is simple: don’t. |
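
As a rough illustration of the “red line” gate, here is a minimal sketch of how a product team might record its per-provision Article 5 screening. The record structure, field names, and clearance rule are assumptions for illustration, not a prescribed or official format.

```python
# Hypothetical Article 5 screening record; structure and names are illustrative
# assumptions, not a prescribed or official format.
from dataclasses import dataclass, field
from datetime import date

PROVISIONS = ("5(1)(a)", "5(1)(b)", "5(1)(c)", "5(1)(d)",
              "5(1)(e)", "5(1)(f)", "5(1)(g)", "5(1)(h)")

@dataclass
class ProvisionAssessment:
    provision: str
    applies: bool      # does this prohibition catch the system?
    reasoning: str     # documented why / why not, for the regulator
    reviewed_on: date

@dataclass
class Article5Screening:
    system_name: str
    assessments: list[ProvisionAssessment] = field(default_factory=list)

    def cleared(self) -> bool:
        # Cleared only when all eight provisions have been assessed
        # and none of them applies; anything else blocks the gate.
        assessed = {a.provision for a in self.assessments}
        return assessed == set(PROVISIONS) and not any(a.applies for a in self.assessments)

# Usage sketch: one provision applies and seven are unassessed, so the gate fails.
screening = Article5Screening("video-interview-scoring")
screening.assessments.append(ProvisionAssessment(
    "5(1)(f)", applies=True,
    reasoning="Infers candidate emotions from facial expressions",
    reviewed_on=date(2026, 1, 28)))
print(screening.cleared())  # False
```

A screening that skips a provision, or flags one as applicable, fails the gate and the feature does not proceed until legal has reviewed it.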

How EYREACT Can Help

EYREACT’s Rule Engine includes specific rules and indicators mapped to every sub-provision of Article 5 — from 5(1)(a) through 5(1)(h). The platform automatically screens your AI portfolio against each prohibition, documents your assessment reasoning provision by provision, and maintains a timestamped audit trail proving you’ve done the work.

Because when the regulator asks “did you check Article 5?” — the answer needs to be documented, traceable, and evidence-backed. Not “we assumed we were fine.” To see how this works against your own AI portfolio, book a demo.

FAQ

When did Article 5 become enforceable?

2 February 2025. Penalties have been enforceable since 2 August 2025. If you’re operating a prohibited AI system today, you’re already in violation.

What are the penalties for violating Article 5?

Up to €35 million or 7% of global annual turnover, whichever is higher. These are the highest penalties in the entire AI Act. For SMEs and startups, the lower of the fixed amount or percentage applies. Italy has also introduced criminal penalties, including imprisonment for certain AI offences.
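
To see how “whichever is higher” works in practice, here is a quick illustrative calculation. The figures are hypothetical and the sketch ignores the SME carve-out.

```python
# Illustrative arithmetic only: the ceiling for Article 5 breaches is the higher
# of €35 million or 7% of total worldwide annual turnover (non-SME case).
def max_fine(global_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * global_turnover_eur)

print(max_fine(600_000_000))  # 42,000,000: 7% exceeds the fixed cap
print(max_fine(200_000_000))  # 35,000,000: the fixed cap applies
```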

Have there been any enforcement actions yet?

At the time of writing, no public enforcement actions for prohibited practices have been announced. However, several investigations are reportedly underway, particularly around workplace emotion recognition and predictive policing. The enforcement landscape is still developing as member states designate competent authorities.

Does “emotion recognition” in Article 5(1)(f) cover text-based sentiment analysis?

The prohibition targets emotion recognition using biometric data (facial expressions, voice tone, physiological signals) in workplace and education settings. Pure text-based sentiment analysis of written content is generally not considered biometric emotion recognition — but if used in a workplace context to infer individual employee emotions and drive employment decisions, the boundary is unclear. Seek specific legal advice.

Our AI personalises ads. Does Article 5(1)(a) apply?

The Commission guidelines explicitly state that personalised advertising is “not inherently manipulative.” Article 5(1)(a) targets AI that uses subliminal techniques — methods the user cannot perceive — to distort behaviour causing significant harm. Transparent personalisation based on stated preferences is fine. AI that invisibly manipulates decision-making below the awareness threshold is not.

We use AI for fraud detection. Is any part of Article 5 relevant?

Fraud detection itself is not prohibited. But specific techniques within a fraud detection system might trigger Article 5. Emotion recognition (analysing customer facial expressions), predictive profiling (flagging customers based on demographics rather than behaviour), or social scoring elements (aggregating behavioural data into disproportionate risk scores) could each trigger different sub-provisions. Assess component by component.

Do the Article 5 prohibitions apply to systems deployed before February 2025?

Yes. There is no grandfather clause for prohibited practices. Unlike some high-risk provisions with transitional arrangements, Article 5 applies to all AI systems regardless of when they were placed on the market.

Can the Commission add more prohibited practices?

The list can grow, but not unilaterally. Article 112 requires periodic review of Article 5; the first review, due in February 2026, may recommend additional prohibited practices based on emerging evidence of harm. Expanding Article 5 itself would require a legislative amendment to the Regulation: the Commission’s delegated-act powers cover areas such as the Annex III high-risk list, not the prohibitions.

We’re based outside the EU. Does Article 5 apply to us?

Yes. Article 5 applies to any provider placing an AI system on the EU market or putting it into service in the EU, and to any deployer using an AI system within the EU — regardless of where the provider or deployer is established. Same extraterritorial reach as GDPR.

What’s the relationship between Article 5 and GDPR?

They operate in parallel. GDPR restricts certain types of personal data processing. Article 5 prohibits certain AI practices. A single system could violate both — for example, biometric categorisation that both processes special category data (GDPR Article 9) and categorises by sensitive characteristics (AI Act Article 5(1)(g)). Penalties could be cumulative.

Where can I read the Commission guidelines on Article 5?

Published 4 February 2025 as “Guidelines on Prohibited Artificial Intelligence Practices Established by Regulation (EU) 2024/1689.” Available on the European Commission website. These guidelines break each prohibition into cumulative conditions and provide practical examples of what’s in scope and what’s not.

This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.