The EU AI Act (Regulation (EU) 2024/1689) is the world’s first binding legal framework for artificial intelligence. It regulates how AI systems are developed, placed on the market, and used across the European Union.
Has the EU Passed the AI Act?
Yes. The European Parliament adopted the AI Act on 13 March 2024. It entered into force on 1 August 2024 following publication in the Official Journal of the European Union (OJ L, 2024/1689, 12 July 2024).
The Act does not apply all at once. Obligations take effect in phases between 2025 and 2027, with the prohibition provisions applying from 2 February 2025 and most high-risk AI provisions applying from 2 August 2026.
What Are the Main Points of the EU AI Act?
The European Union Artificial Intelligence Act (EU AI Act) regulates AI systems through a risk-based structure, assigning obligations across four categories of AI activity: prohibited practices, high-risk systems, limited-risk systems, and general-purpose AI models.
Obligations fall on four types of operator: providers (those who develop or place AI systems on the market), deployers (organisations that use AI systems in a professional context), importers, and distributors.
EU AI Act Risk Pyramid
| Risk Level | Examples | Regulatory Response |
|---|---|---|
| Unacceptable risk | Social scoring; real-time remote biometric identification in publicly accessible spaces for law enforcement; subliminal manipulation systems | Prohibited outright |
| High risk | AI used in recruitment, credit scoring, education, law enforcement, critical infrastructure | Mandatory conformity assessment, technical documentation, human oversight |
| Limited risk | Chatbots, deepfake generators, emotion recognition systems (outside prohibited contexts) | Transparency obligations to end users |
| Minimal risk | Spam filters, AI in video games | No mandatory obligations (voluntary codes encouraged) |
What the Act Prohibits
Article 5 lists AI practices that are banned entirely from 2 February 2025. The prohibited practices include:
- AI systems that use subliminal techniques to distort behaviour in ways that cause harm
- AI that exploits vulnerabilities of specific groups (children, elderly, persons with disabilities)
- Real-time remote biometric identification by law enforcement in publicly accessible spaces, with narrow exceptions for serious-crime investigation subject to judicial authorisation
- AI systems used to create or expand facial recognition databases through untargeted scraping
- Emotion recognition in workplace or educational contexts, except for medical or safety reasons
- Biometric categorisation systems that infer sensitive characteristics such as race, political opinion, or sexual orientation
- Social scoring systems that lead to detrimental or unfavourable treatment, whether operated by public or private actors
Operators who deploy any of these systems face fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher (Article 99(3)).
High-Risk AI: What the Obligations Require
High-risk AI systems carry the most demanding compliance obligations. Providers must satisfy eight categories of requirement before placing such a system on the market.
| Obligation | Article | What It Requires |
|---|---|---|
| Risk management system | Art. 9 | Continuous identification, analysis, and mitigation of risks throughout the lifecycle |
| Data and data governance | Art. 10 | Training, validation, and testing data must be relevant, representative, and, to the best extent possible, free of errors and complete |
| Technical documentation | Art. 11 + Annex IV | Full documentation of the system’s design, development, testing, and performance |
| Record-keeping and logging | Art. 12 | Automatic logging of events sufficient to enable post-market monitoring and incident investigation (see the sketch after this table) |
| Transparency to deployers | Art. 13 | Instructions for use covering capabilities, limitations, foreseeable misuse, and human oversight requirements |
| Human oversight | Art. 14 | Oversight measures allowing humans to monitor, interpret, and override system outputs |
| Accuracy, robustness and cybersecurity | Art. 15 | Defined performance metrics and resilience against adversarial attacks |
| Quality management system | Art. 17 | Documented QMS covering design, development, post-market monitoring, and corrective action |
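To make the Article 12 duty concrete, here is a minimal sketch of structured event logging for a high-risk system, written in Python. The class, field names, and file format are illustrative assumptions; the Act prescribes what the logs must enable, not how they are implemented.

```python
import json
import uuid
from datetime import datetime, timezone

class InferenceEventLog:
    """Hypothetical append-only event log for a high-risk AI system,
    sketching the kind of record-keeping Article 12 contemplates."""

    def __init__(self, path: str, model_version: str):
        self.path = path
        self.model_version = model_version

    def record(self, input_ref: str, output: str, confidence: float) -> str:
        event_id = str(uuid.uuid4())
        event = {
            "event_id": event_id,
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "input_ref": input_ref,  # reference to the input, not the raw data
            "output": output,
            "confidence": confidence,
        }
        # Append-only JSON Lines so each event stays individually
        # timestamped and easy to replay during incident investigation.
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")
        return event_id

log = InferenceEventLog("inference_events.jsonl", model_version="1.4.2")
log.record(input_ref="cv-2025-00318", output="shortlist", confidence=0.87)
```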
Deployers of high-risk AI systems carry separate obligations under Article 26, and certain deployers must also conduct fundamental rights impact assessments under Article 27 before putting a system into use.
Annex III: Categories of High-Risk AI Systems
The following categories of AI system are classified as high-risk under Annex III:
- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training (access, assessment, monitoring)
- Employment, workers management, and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum, and border control management
- Administration of justice and democratic processes
Not every AI application in these sectors is automatically high-risk. Providers must apply the Article 6 screening test to confirm classification.
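The screening logic can be pictured as a short decision procedure. The sketch below assumes a simplified reading of Articles 6(1) and 6(3); the condition names are illustrative, it covers only a subset of the Article 6(3) derogations, and real classification calls for legal analysis of the specific system.

```python
from dataclasses import dataclass

@dataclass
class ScreeningInput:
    """Simplified facts about a candidate system; a hypothetical
    structure, not a form defined by the Act."""
    in_annex_iii_category: bool      # falls under one of the eight Annex III areas
    performs_profiling: bool         # profiles natural persons
    narrow_procedural_task: bool     # Art. 6(3)(a) derogation condition
    improves_prior_human_work: bool  # Art. 6(3)(b) derogation condition
    safety_component_annex_i: bool   # safety component of an Annex I product

def classify(s: ScreeningInput) -> str:
    # Art. 6(1): safety components of Annex I products are high-risk.
    if s.safety_component_annex_i:
        return "high-risk"
    if not s.in_annex_iii_category:
        return "not high-risk under Art. 6 (other duties may still apply)"
    # Art. 6(3): Annex III systems that profile people are always high-risk.
    if s.performs_profiling:
        return "high-risk"
    # Art. 6(3) derogation: some Annex III systems escape classification.
    if s.narrow_procedural_task or s.improves_prior_human_work:
        return "not high-risk (document the Art. 6(3) assessment)"
    return "high-risk"

candidate = ScreeningInput(
    in_annex_iii_category=True,   # e.g. a recruitment screening tool
    performs_profiling=True,
    narrow_procedural_task=False,
    improves_prior_human_work=False,
    safety_component_annex_i=False,
)
print(classify(candidate))  # "high-risk"
```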
General-Purpose AI Models
The Act introduces obligations for general-purpose AI (GPAI) models, covering providers of foundation models such as large language models and multimodal systems.
All GPAI model providers must maintain technical documentation, provide downstream providers with the information they need for their own compliance, put in place a policy to comply with EU copyright law, and publish a sufficiently detailed summary of the content used for training (Article 53).
Providers whose models are classified as posing systemic risk — currently defined by a training compute threshold of 10²⁵ FLOPs under Article 51 — face additional obligations: adversarial testing, systemic risk assessment, serious incident reporting to the European AI Office, and enhanced cybersecurity measures.
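Whether a model crosses the 10²⁵ FLOPs presumption can be approximated with the widely used 6 × parameters × training-tokens rule of thumb. This is an engineering estimate, not a method the Act prescribes, and the model sizes below are purely hypothetical.

```python
# Rough training-compute estimate using the common 6 * N * D
# approximation (an engineering rule of thumb, not prescribed by the Act).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")                    # ~6.30e24
print(flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False: below the presumption
```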
Conformity Assessment
Before placing a high-risk AI system on the market, providers must complete a conformity assessment demonstrating that the system meets all requirements of Chapter III, Section 2.
For most high-risk systems listed in Annex III, providers may conduct a conformity assessment based on internal control (Annex VI). Biometric systems under Annex III point 1 may require third-party assessment by a notified body (Annex VII), and systems covered by Annex I sector legislation, such as medical devices, machinery, and aviation, follow the third-party procedures of that sectoral legislation.
Where a high-risk AI system undergoes a substantial modification after it has been placed on the market, the provider must carry out a new conformity assessment before the modified system is made available or used again (Article 43(4)).
Transparency Obligations for Limited-Risk AI
Providers and deployers of AI systems that interact directly with users must inform those users that they are interacting with an AI system, unless the AI nature of the interaction is obvious from context.
Providers of systems that generate synthetic audio, video, text, or images must ensure outputs are marked in a machine-readable format identifying the content as AI-generated (Article 50(2)). Deployers of deepfake systems carry a separate duty to disclose that the content has been artificially generated or manipulated (Article 50(4)).
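As an illustration of machine-readable marking, the sketch below embeds a provenance flag in PNG text metadata using Pillow. The key names are assumptions; Article 50(2) does not mandate a particular format, and production systems typically rely on emerging standards such as C2PA content credentials.

```python
# A minimal sketch of machine-readable provenance marking for a
# generated image, assuming PNG text metadata is an acceptable carrier.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src: str, dst: str, generator: str) -> None:
    image = Image.open(src)
    meta = PngInfo()
    # Hypothetical keys; real deployments should follow an agreed standard.
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    image.save(dst, pnginfo=meta)

mark_as_ai_generated("output.png", "output_marked.png", generator="example-model-v2")
```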
Enforcement and Penalties
| Infringement Category | Maximum Fine |
|---|---|
| Prohibited practices (Art. 5) | EUR 35,000,000 or 7% of global annual turnover |
| Other obligations (providers, deployers, importers) | EUR 15,000,000 or 3% of global annual turnover |
| Incorrect or misleading information to authorities | EUR 7,500,000 or 1.5% of global annual turnover |
| GPAI model providers (most violations) | EUR 15,000,000 or 3% of global annual turnover |
For SMEs and start-ups, fines are capped at the lower of the percentage threshold or the fixed amount.
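The penalty arithmetic is easy to check. The sketch below computes the applicable cap: the higher of the fixed amount and the turnover percentage in the standard case, and the lower of the two for SMEs. The turnover figures are hypothetical.

```python
def max_fine(turnover_eur: float, fixed_eur: float, pct: float, sme: bool = False) -> float:
    """Applicable fine cap: max(fixed, pct * turnover) in the standard
    case (Art. 99), min of the two for SMEs and start-ups."""
    pct_amount = turnover_eur * pct
    return min(fixed_eur, pct_amount) if sme else max(fixed_eur, pct_amount)

# Prohibited-practice breach by a company with EUR 2 billion turnover:
print(max_fine(2e9, 35_000_000, 0.07))             # 140,000,000 (7% exceeds EUR 35M)
# Same breach by an SME with EUR 10 million turnover:
print(max_fine(10e6, 35_000_000, 0.07, sme=True))  # 700,000 (lower of the two)
```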
Key Enforcement Dates
| Date | What Takes Effect |
|---|---|
| 2 February 2025 | Prohibited AI practices (Art. 5) |
| 2 August 2025 | GPAI model obligations (Chapter V); governance rules; penalties provisions |
| 2 August 2026 | High-risk AI systems (Annex III); deployer obligations; conformity assessment |
| 2 August 2027 | High-risk AI systems embedded in Annex I regulated products |
Territorial Scope of the EU AI Act
The AI Act applies to providers who place AI systems on the EU market or put them into service in the EU, regardless of where they are established. It also reaches providers and deployers located in third countries where the output produced by the system is used in the EU (Article 2(1)(c)). A provider based in the United States or United Kingdom that supplies an AI system to EU customers must comply.
Third-country providers must appoint an EU-established authorised representative before placing high-risk AI systems or GPAI models on the market (Article 22; Article 54 for GPAI). Deployers established in the EU are subject to the Act regardless of where their AI system provider is located.
Who Enforces the AI Act?
Each Member State must designate at least one notifying authority and at least one market surveillance authority as national competent authorities. These authorities will investigate complaints, conduct audits, and impose penalties at national level.
The European AI Office, established within the European Commission, oversees GPAI model compliance across the EU. It has powers to request documentation, conduct evaluations, and impose fines on GPAI model providers directly.
Key Definitions You Must Know to Understand the EU AI Act
AI System: A machine-based system designed to operate with varying levels of autonomy, that infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Article 3(1)).
Provider: A natural or legal person who develops an AI system or general-purpose AI model and places it on the market or puts it into service under their own name or trademark (Article 3(3)).
Deployer: A natural or legal person who uses an AI system under their authority in a professional context, except for personal non-professional use (Article 3(4)).
High-Risk AI System: An AI system listed in Annex III of the Act, or one that forms a safety component of a product covered by EU harmonisation legislation listed in Annex I.
Substantial Modification: A change to an AI system after its market placement that affects the system’s compliance with the Act or changes the intended purpose (Article 3(23)).
General-Purpose AI Model (GPAI): An AI model trained on large amounts of data, capable of serving a wide range of purposes, and integrated into various downstream systems (Article 3(63)).
Systemic Risk: For GPAI models, risks arising from high-impact capabilities that could have significant negative effects at Union scale, including critical infrastructure disruption, loss of control over AI decision-making, or large-scale dissemination of harmful content (Article 3(65)).
Authorised Representative: A natural or legal person established in the EU, designated in writing by a third-country provider to act on their behalf with respect to AI Act obligations (Article 3(5)).
Notified Body: A conformity assessment body designated by a Member State authority to carry out third-party conformity assessments of high-risk AI systems where required by Annex VII.
Post-Market Monitoring (PMM): The systematic process of collecting and reviewing experience from AI systems placed on the market, to identify and address risks emerging during operation (Article 72).
Fundamental Rights Impact Assessment (FRIA): An assessment required of certain deployers to evaluate the impact of deploying a high-risk AI system on the rights of affected persons (Article 27).
Intended Purpose: The use for which an AI system is designed, including the specific context and conditions of use specified by the provider in the documentation, instructions for use, or promotional materials (Article 3(12)).
Frequently Asked Questions
Does the AI Act apply to AI systems developed outside the EU?
Yes. The Act applies to any provider who places an AI system on the EU market or puts it into service within the EU, irrespective of where the provider is established. Providers outside the EU must appoint an authorised representative within the EU for high-risk systems and GPAI models.
Is every AI system subject to the same requirements?
No. The obligations vary significantly by risk category. A chatbot subject to transparency requirements faces far lighter obligations than a recruitment AI system classified as high-risk under Annex III.
What is the AI Act’s relationship to GDPR?
The AI Act operates alongside GDPR rather than replacing it. Where AI systems process personal data, both frameworks apply simultaneously. Deployers carrying out fundamental rights impact assessments under Article 27 of the AI Act should coordinate those assessments with any DPIA obligations under GDPR Article 35.
What is a high-risk AI system for the purposes of the Act?
A high-risk AI system is either (a) an AI system that forms a safety component of a product covered by EU harmonisation legislation listed in Annex I, or (b) an AI system listed specifically in Annex III of the Act across the eight high-risk categories.
What is the role of the European AI Office?
The European AI Office is the EU’s central body for AI governance. It oversees GPAI model compliance, coordinates enforcement across Member States, develops codes of practice, and can directly investigate and fine GPAI model providers.
Does the AI Act apply to open-source AI models?
The Act provides a partial exemption for open-source GPAI models. Providers of GPAI models released under a free and open-source licence, with parameters, architecture, and usage information made publicly available, are exempt from most GPAI documentation obligations, although the training-content summary and copyright-policy duties still apply. Where the model poses systemic risk, the full set of systemic risk obligations applies regardless of licensing terms (Article 53(2)).
When must a deployer conduct a fundamental rights impact assessment?
Deployers that are public bodies or private entities providing public services, and deployers of certain Annex III systems such as creditworthiness assessment and life and health insurance pricing, must conduct a fundamental rights impact assessment before first use (Article 27). The assessment must cover the populations affected, the rights at risk, and the mitigations in place.
What counts as a substantial modification requiring a new conformity assessment?
A modification is substantial if it affects the AI system's compliance with the Act or changes the intended purpose in a way not anticipated in the original assessment (Article 3(23)). Providers should assess every significant system update against this definition, document the outcome, and repeat the conformity assessment under Article 43(4) where the threshold is met.