A couple of months ago, a friend who runs operations at a European logistics company asked me to explain the EU AI Act. “Just give me the summary,” he said. “I don’t need to become a lawyer. I need to know what it is, whether it affects me, and what I have to do about it.”
And that’s what I hear, give or take, from roughly 90% of the early adopters I talk to.
Fair enough. But every “summary” I could find was either a 200-page legal analysis dressed up as a summary, or a marketing page that told you nothing useful and ended with “contact us to learn more.”
So I wrote the summary I wish someone had given me a year ago. One page you can read in fifteen minutes that actually tells you what this law is, how it works, who it applies to, what you’re required to do, when the deadlines hit, and what happens if you ignore it. With specific guidance for every major industry affected.
Let’s go.
What Is the EU AI Act?
The EU AI Act — officially Regulation (EU) 2024/1689 — is the world’s first comprehensive law regulating artificial intelligence. It was adopted on 13 June 2024, published on 12 July 2024, and entered into force on 1 August 2024. It is not a proposal, draft, or guideline. It is binding law, directly applicable across all 27 EU member states.
The Act regulates the development, placement on the market, and use of AI systems within the EU. It follows a risk-based approach: the higher the risk an AI system poses to people’s health, safety, or fundamental rights, the stricter the rules.
| Essential Fact | Detail |
|---|---|
| Official name | Regulation (EU) 2024/1689 (Artificial Intelligence Act) |
| Type | EU Regulation — directly applicable, no national transposition needed |
| Adopted | 13 June 2024 |
| In force since | 1 August 2024 |
| Scope | 180 recitals, 113 articles, 13 annexes |
| Approach | Risk-based: four tiers with escalating obligations |
| Extraterritorial | Yes — applies to non-EU companies if their AI systems or outputs are used in the EU |
| Maximum penalty | €35 million or 7% of global annual turnover |
| Primary enforcer | National market surveillance authorities + EU AI Office (for GPAI) |
The Risk Pyramid: Four EU AI Act Risk Tiers
Everything in the AI Act flows from one question: how much risk does this AI system pose?
| Risk Tier | What It Means | Examples | Obligations |
|---|---|---|---|
| Unacceptable (Prohibited) | AI practices that are banned outright because they threaten fundamental rights, democracy, or safety | Social scoring, subliminal manipulation, emotion recognition in workplaces, untargeted facial scraping, predictive policing from profiling | Complete ban, save for narrow law-enforcement carve-outs on biometrics. Enforceable since February 2025. |
| High-Risk | AI systems that significantly impact people’s health, safety, or fundamental rights | Credit scoring, recruitment AI, medical diagnostics, biometric identification, critical infrastructure management, education, law enforcement, insurance pricing | Full compliance: risk management, data governance, technical documentation, human oversight, conformity assessment, CE marking, registration, post-market monitoring. Enforceable from August 2026. |
| Limited Risk | AI systems that interact with people or generate content, requiring transparency | Chatbots, deepfake generators, emotion recognition (outside workplace/education), AI-generated text/images/video | Transparency obligations: users must know they’re interacting with AI; AI-generated content must be labelled. Enforceable from August 2026. |
| Minimal Risk | AI systems posing negligible risk | Spam filters, AI-enabled video games, inventory management, recommendation engines (non-consequential) | No specific obligations. Voluntary codes of conduct encouraged. |
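As a rough mental model, the tiering above can be sketched as a lookup. The labels, sets, and `risk_tier` function below are illustrative shorthand of mine, not categories from the Act — real classification requires legal analysis of the actual use case:

```python
# Illustrative sketch of the Act's four-tier model. The use-case labels and
# the three sets below are simplified examples, not the legal categories.

PROHIBITED = {"social_scoring", "workplace_emotion_recognition", "untargeted_face_scraping"}
HIGH_RISK = {"credit_scoring", "recruitment_screening", "medical_diagnostics", "exam_proctoring"}
LIMITED_RISK = {"customer_chatbot", "deepfake_generator", "ai_image_generator"}

def risk_tier(use_case: str) -> str:
    """Map a simplified use-case label to its AI Act risk tier."""
    if use_case in PROHIBITED:
        return "unacceptable"  # banned outright, enforceable since Feb 2025
    if use_case in HIGH_RISK:
        return "high"          # full compliance regime from Aug 2026
    if use_case in LIMITED_RISK:
        return "limited"       # transparency obligations only
    return "minimal"           # no specific obligations
```

The point of the sketch: obligations attach to the *use case*, not the underlying technology — the same model can land in different tiers depending on what it is used for.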
Key EU AI Act Definitions
The AI Act introduces its own vocabulary. Here are the definitions that matter most.
| Term | Definition | Why It Matters |
|---|---|---|
| AI system | A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness and that infers, from input, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments | This is intentionally broad. If your software infers outputs from data in a way that influences decisions or environments, it’s likely an AI system under the Act. |
| Provider | An entity that develops an AI system (or has one developed) and places it on the market or puts it into service under its own name or trademark | The provider carries the heaviest obligations. If you build AI and sell it, you’re the provider. |
| Deployer | An entity that uses an AI system under its authority in a professional capacity | If you buy or license AI and use it in your business, you’re the deployer. Separate but real obligations apply. |
| Placing on the market | Making an AI system available for the first time on the EU market for distribution or use, whether for payment or free | Even free AI tools are “placed on the market” if they’re available to EU users. |
| Putting into service | The supply of an AI system for first use directly to the deployer, or for own use, in the EU for its intended purpose | Internal AI tools you build for your own company are “put into service.” |
| High-risk AI system | An AI system that is either: (a) a safety component of a product covered by EU product legislation requiring third-party assessment, or (b) listed in one of eight Annex III use case categories and posing significant risk of harm | Classification determines your entire compliance obligation. Get this right. |
| General-purpose AI (GPAI) model | An AI model displaying significant generality, capable of performing a wide range of tasks, and able to be integrated into various downstream systems | Foundation models like GPT, Claude, Gemini, Llama. Separate obligation framework under Chapter V. |
| Systemic risk | Risk specific to the high-impact capabilities of GPAI models, having a significant effect on the EU market due to reach or actual/foreseeable negative effects on health, safety, public security, or fundamental rights | Presumed if trained with ≥10²⁵ FLOPs. Triggers additional obligations. |
| Conformity assessment | Evaluation process verifying that an AI system meets AI Act requirements before market placement | Self-assessment for most Annex III systems. Third-party (notified body) for biometric ID and product-embedded AI. |
| Authorised representative | An entity in the EU mandated by a non-EU provider to act on their behalf for AI Act obligations | Required for non-EU providers of high-risk systems and GPAI models. Without one, you can’t legally sell in the EU. |
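The systemic-risk presumption in the table above is a plain numeric threshold on training compute; a minimal sketch, with a function name of my own choosing (the presumption is rebuttable, and the Commission can also designate models on other grounds):

```python
# Article 51(2) presumption: a GPAI model trained with >= 10^25 FLOPs
# is presumed to have high-impact capabilities (systemic risk).
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a GPAI model crosses the training-compute presumption threshold."""
    return training_flops >= SYSTEMIC_RISK_FLOP_THRESHOLD
```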
Who Has to Do What: EU AI Act Obligations by Role
| Role | Key Obligations |
|---|---|
| Provider (high-risk) | Risk management system, data governance, technical documentation (Annex IV), record-keeping design, instructions for use, human oversight design, accuracy/robustness/cybersecurity, quality management system, conformity assessment, CE marking, EU database registration, post-market monitoring, serious incident reporting, 10-year documentation retention |
| Deployer (high-risk) | Use system per provider’s instructions, assign human oversight to competent staff, monitor system operation, maintain logs (minimum 6 months), transparency to affected individuals, incident reporting to provider and authority, Fundamental Rights Impact Assessment (certain deployers), data protection impact assessment (if GDPR applies) |
| Provider (GPAI standard) | Technical documentation, downstream provider information, copyright compliance policy, training data summary publication |
| Provider (GPAI systemic risk) | All standard GPAI obligations plus model evaluations, adversarial testing, serious incident reporting, cybersecurity protections |
| Importer | Verify provider compliance, CE marking, documentation availability before placing system on EU market |
| Distributor | Verify CE marking and documentation; take corrective action if non-conformity identified |
| Authorised representative | Maintain documentation copies, cooperate with authorities, provide information on request, terminate mandate if provider non-compliant |
EU AI Act Timeline: The Dates to Keep in Mind
| Date | Status | What Applies |
|---|---|---|
| 1 Aug 2024 | ✅ Done | AI Act enters into force |
| 2 Feb 2025 | ✅ Enforceable | Prohibited practices banned; AI literacy obligations |
| 2 Aug 2025 | ✅ Enforceable | GPAI obligations; governance bodies operational; penalty regime live |
| 2 Aug 2026 | Nearly there | High-risk AI (Annex III); transparency rules; regulatory sandboxes; GPAI enforcement powers |
| 2 Aug 2027 | Upcoming | Product-embedded AI (Annex I); legacy GPAI compliance |
| 2 Aug 2030 | Future | Public authority legacy systems |
| 31 Dec 2030 | Future | Large-scale IT systems (Annex X) |
Digital Omnibus note: The Commission proposed extending high-risk deadlines to December 2027 (Annex III) and August 2028 (Annex I). This proposal is being debated and has not been adopted. Plan for August 2026.
EU AI Act Penalties
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | €35M or 7% of global turnover |
| High-risk non-compliance | €15M or 3% of global turnover |
| Incorrect information to authorities | €7.5M or 1% of global turnover |
| SME/startup reduction | Fines capped at the lower of the fixed amount or the turnover percentage |
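Under Article 99, the applicable ceiling for an undertaking is the *higher* of the fixed amount and the turnover percentage, while SMEs and startups get the *lower* of the two. A sketch of that arithmetic (the `max_fine` helper is mine):

```python
def max_fine(fixed_eur: float, pct: float, global_turnover_eur: float, is_sme: bool) -> float:
    """Maximum fine ceiling for a violation tier.

    Large undertakings: higher of the fixed amount and the turnover percentage.
    SMEs/startups: lower of the two.
    """
    pct_amount = pct * global_turnover_eur
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# Prohibited-practice tier (€35M / 7%): for a €1bn-turnover company,
# 7% of turnover exceeds the fixed amount, so the percentage governs.
large_co_cap = max_fine(35e6, 0.07, 1e9, is_sme=False)
# For an SME with €100M turnover, the lower figure (7% = €7M) applies.
sme_cap = max_fine(35e6, 0.07, 1e8, is_sme=True)
```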
Industry-Specific EU AI Act Summaries
EU AI Act for Banking & Financial Services
What’s affected: Credit scoring, creditworthiness assessment, fraud detection, KYC/AML automation, algorithmic trading decisions, insurance risk pricing, loan approval automation.
Risk classification: Credit scoring and creditworthiness are explicitly high-risk under Annex III (5b). Life and health insurance risk assessment and pricing under Annex III (5c). Emergency call triage under Annex III (5d).
What to do first: Inventory every AI system that touches credit, lending, insurance, or customer risk decisions. Most will be high-risk. Start technical documentation and risk management now. Deployers (banks using third-party AI) must ensure human oversight of automated financial decisions and conduct FRIAs for essential service access. Coordinate with DORA requirements for ICT third-party risk and MiFID II for algorithmic trading obligations.
Overlap with existing regulation: GDPR Article 22 (automated decision-making), DORA (ICT risk management), MiFID II (algorithmic trading), CRD/CRR (model risk), EBA guidelines on AI in credit.
EU AI Act Summary for Healthcare
What’s affected: AI diagnostics, clinical decision support, medical image analysis, drug discovery AI, patient triage, health insurance risk assessment.
Risk classification: AI in medical devices is high-risk under Annex I (Medical Devices Regulation). Healthcare triage is high-risk under Annex III (5d). Health insurance pricing under Annex III (5c).
What to do first: Medical AI faces dual regulation — AI Act and MDR. Conformity assessment through a notified body designated under MDR, covering both frameworks. Ensure human oversight means clinicians can interpret and override AI recommendations. Start notified body engagement now — assessment takes 9-24 months.
Overlap with existing regulation: Medical Devices Regulation (MDR), IVDR, GDPR Article 9 (health data), national healthcare data laws.
EU AI Act Summary for Recruitment & HR
What’s affected: CV screening, video interview analysis, candidate ranking, automated skill matching, workforce management, performance monitoring, task allocation, promotion and termination decisions.
Risk classification: Employment and worker management is explicitly high-risk under Annex III (category 4). Nearly all AI touching hiring or HR decisions qualifies.
What to do first: Audit every AI tool in your recruitment and HR stack — including third-party SaaS platforms. As deployer, you need human oversight of hiring decisions, transparency to candidates, and monitoring for bias. If you’ve customised vendor AI substantially, you may have become the provider under Article 25. Watch for emotion recognition features in video interview tools — these may be prohibited in workplace settings.
Overlap with existing regulation: GDPR Article 22, national employment law, works council requirements (Germany), CNIL guidance (France), Platform Work Directive.
Education and EU AI Act
What’s affected: Admissions algorithms, automated grading, exam proctoring, adaptive learning platforms, student performance monitoring, learning outcome assessment.
Risk classification: Education is explicitly high-risk under Annex III (category 3) when AI determines access to education, evaluates learning outcomes, monitors exam behaviour, or adapts instruction level.
What to do first: Educational institutions deploying third-party AI are deployers with obligations for human oversight (educators must retain decision-making authority), transparency to students and parents, and log retention. Proctoring systems using facial recognition trigger biometric obligations and potentially dual high-risk classification.
Overlap with existing regulation: GDPR (student data), national education laws, children’s data protection rules.
EU AI Act Summary for Insurance
What’s affected: Risk assessment, pricing algorithms, claims processing, fraud detection, underwriting automation, customer segmentation.
Risk classification: Life and health insurance risk assessment and pricing is explicitly high-risk under Annex III (5c). Other insurance AI may be high-risk depending on its impact on access to essential services.
What to do first: Insurers using AI for risk assessment must conduct FRIAs, ensure transparency to policyholders about AI’s role in pricing, and implement human oversight of consequential decisions. If AI pricing systematically disadvantages demographic groups, demonstrate that the fundamental rights impact has been assessed and mitigated.
Overlap with existing regulation: GDPR, Solvency II, DORA, national insurance regulation, anti-discrimination law.
Manufacturing & Critical Infrastructure: EU AI Act Summary
What’s affected: AI-controlled robotics, predictive maintenance, quality control AI, smart grid management, traffic management, water/gas/heating/electricity supply management.
Risk classification: AI as safety component of machinery — high-risk under Annex I (Machinery Regulation). AI in critical infrastructure management — high-risk under Annex III (category 2).
What to do first: Identify AI systems that are safety components of regulated products. These need notified body conformity assessment under product legislation. Deadline is August 2027 (not 2026). But notified body engagement takes time — start now. For critical infrastructure AI, standard Annex III compliance (self-assessment) applies from August 2026.
Overlap with existing regulation: Machinery Regulation, NIS2 (cybersecurity), sector-specific safety standards.
EU AI Act for Law Enforcement & Migration
What’s affected: Predictive policing, evidence evaluation, risk assessment of individuals, polygraph/lie detection AI, border control, visa processing, asylum application examination.
Risk classification: Law enforcement AI is high-risk under Annex III (categories 6-7). Real-time remote biometric identification in public spaces is prohibited (with narrow exceptions). Predictive policing based solely on profiling is prohibited.
What to do first: Immediately verify that no prohibited practices are in operation — these have been enforceable since February 2025. For permitted high-risk law enforcement AI, biometric identification requires third-party conformity assessment, FRIAs are mandatory, and real-time biometric exceptions require national authorising procedures.
Overlap with existing regulation: Law Enforcement Directive, GDPR, national criminal justice laws, fundamental rights frameworks.
Technology & SaaS: EU AI Act Summary
What’s affected: AI features in enterprise software, chatbots, recommendation engines, content moderation, generative AI tools, analytics platforms.
Risk classification: Varies enormously. A customer service chatbot is limited risk (transparency only). An AI feature that screens job applications is high-risk. A content recommendation engine is minimal risk — unless it exploits user vulnerabilities. Classification depends on the use case, not the technology.
What to do first: Map every AI feature in your product against the risk classification framework. Don’t assume “we’re just a SaaS company” means low risk — it depends on what your AI does and who it affects. If any EU customers use your AI for high-risk purposes, provider obligations may apply to you regardless of your location.
Overlap with existing regulation: GDPR, Digital Services Act (content moderation), Digital Markets Act (gatekeepers), Copyright Directive (GPAI training data).
How EYREACT Can Help
EYREACT was built to turn these summaries into one action plan. Our platform automates risk classification, maps every obligation to Living Compliance Binders, tracks evidence collection in real time, and shows you exactly where your gaps are — across your entire AI portfolio.
400+ rules derived directly from the regulation. Every article. Every annex. One dashboard. Book a demo!
FAQ
What is the EU AI Act in simple terms?
It’s a law that regulates AI systems in Europe based on how much risk they pose to people. The riskier the AI, the stricter the rules. Some AI practices are banned entirely. High-risk AI systems must meet extensive safety and documentation requirements. Lower-risk AI just needs to be transparent. It applies to anyone whose AI is used in the EU, regardless of where their company is based.
Is the EU AI Act already in effect?
Yes. It entered into force on 1 August 2024. Prohibited practices have been enforceable since February 2025. GPAI obligations since August 2025. High-risk obligations apply from August 2026.
Does it apply to my company if I’m based outside the EU?
Yes, if your AI system is placed on the EU market, put into service in the EU, or produces outputs used within the EU. Same extraterritorial principle as GDPR.
What counts as an “AI system” under the Act?
A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness, and that infers from input how to generate outputs (predictions, content, recommendations, decisions) that influence physical or virtual environments. The definition is intentionally broad.
How do I know if my AI system is high-risk?
Check two things: (1) Is it a safety component of a product requiring third-party conformity assessment under EU product legislation? (2) Does it fall within one of the eight Annex III use case categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice)? If yes to either, and it poses significant risk of harm, it’s high-risk.
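The two checks in that answer can be written as a tiny decision function — a sketch with illustrative boolean inputs, not a legal test (determining each input is the hard, lawyer-assisted part):

```python
def is_high_risk(is_regulated_safety_component: bool,
                 in_annex_iii_category: bool,
                 poses_significant_risk: bool = True) -> bool:
    """Two-route high-risk test: Annex I safety component OR an Annex III
    use case that poses significant risk of harm."""
    return is_regulated_safety_component or (in_annex_iii_category and poses_significant_risk)
```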
What’s the difference between a provider and a deployer?
The provider develops the AI system and places it on the market. The deployer uses it in a professional capacity. The provider has heavier obligations (documentation, conformity assessment, CE marking). The deployer has independent obligations (human oversight, monitoring, transparency). Both are accountable.
What are the penalties?
Up to €35 million or 7% of global turnover for prohibited practices. Up to €15 million or 3% for high-risk non-compliance. Up to €7.5 million or 1% for incorrect information. SMEs and startups get the lower of the fixed amount or percentage.
What’s the Digital Omnibus?
A Commission proposal from November 2025 that would extend high-risk deadlines, relax certain requirements, and simplify implementation. It hasn’t been adopted. Don’t plan around it.
How does the AI Act relate to GDPR?
They’re complementary. GDPR protects personal data. The AI Act regulates AI systems (whether or not they process personal data). Most high-risk AI systems process personal data, so both apply simultaneously. Impact assessments can be combined. Existing GDPR compliance provides roughly 40% of the foundation for AI Act compliance.
Where do I start?
Inventory your AI systems. Classify them by risk tier. Determine your role (provider or deployer). Check for prohibited practices immediately. Start documentation for high-risk systems. Budget for compliance. Don’t wait for the deadline.
This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.