February 6, 2026

EU AI Act Timeline: Every Key Date from 2024 to 2030

Last updated: March 2026

The EU AI Act doesn’t switch on overnight. Since the Act entered into force on 1 August 2024, its obligations have been phased in over three years, with different deadlines applying to different risk categories, roles in the supply chain, and types of AI systems. Some deadlines have already passed. Others are months away. And the Digital Omnibus proposal may push certain dates further.

This guide provides the complete implementation timeline, explains what applies at each phase, clarifies who is affected, and covers the potential impact of the Digital Omnibus on your compliance roadmap.

The Complete EU AI Act Timeline at a Glance

| Date | What Happens | Who Is Affected |
|---|---|---|
| 1 August 2024 | AI Act enters into force | Everyone — the clock starts |
| 2 February 2025 | Prohibited AI practices banned; AI literacy obligations apply | All operators of AI systems |
| 2 August 2025 | GPAI model obligations apply; governance infrastructure must be in place; penalty regime takes effect; member states designate national competent authorities | GPAI model providers; member states |
| 2 February 2026 | Commission publishes guidelines on high-risk AI classification | All operators of high-risk AI systems |
| 2 August 2026 | High-risk AI obligations (Annex III) apply; transparency obligations (Article 50) apply; regulatory sandboxes must be operational; GPAI penalty enforcement begins | Providers and deployers of high-risk AI systems; all AI system operators with transparency obligations |
| 2 August 2027 | High-risk AI obligations for product-embedded systems (Annex I) apply; GPAI models placed on market before August 2025 must be compliant; full enforcement for all remaining provisions | Providers of AI systems embedded in regulated products; legacy GPAI providers |
| 2 August 2030 | Legacy high-risk AI systems used by public authorities must be brought into compliance | Public authorities using pre-existing high-risk AI systems |
| 31 December 2030 | Large-scale IT systems (Annex X) in freedom, security, and justice must comply | Operators of large-scale EU IT systems |
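Teams building compliance tooling often encode this schedule as data rather than prose. The Python sketch below is purely illustrative (milestone labels are paraphrased from the timeline, and the structure is an assumption, not part of the Act): it looks up which milestones have already passed on a given date.

```python
from datetime import date

# Applicability dates from the timeline above, mapped to a short label.
# Illustrative only — not legal advice.
MILESTONES = {
    date(2024, 8, 1): "AI Act enters into force",
    date(2025, 2, 2): "Prohibited practices banned; AI literacy",
    date(2025, 8, 2): "GPAI obligations; governance; penalty regime",
    date(2026, 2, 2): "Commission guidelines on high-risk classification",
    date(2026, 8, 2): "High-risk (Annex III); transparency (Art. 50); sandboxes",
    date(2027, 8, 2): "High-risk (Annex I); legacy GPAI; full enforcement",
    date(2030, 8, 2): "Public-authority legacy high-risk systems",
    date(2030, 12, 31): "Large-scale IT systems (Annex X)",
}

def in_effect(today: date) -> list[str]:
    """Return the milestones whose applicability date has already passed."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]

# As of March 2026, four milestones have taken effect.
print(in_effect(date(2026, 3, 1)))
```

A lookup like this makes it easy to drive dashboards or alerts off the same dates the article lists, and to swap in new dates if the Digital Omnibus changes them.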

Phase-by-Phase EU AI Act Timeline Breakdown

Phase 1: Entry into Force (1 August 2024)

The AI Act was formally adopted on 13 June 2024 and entered into force on 1 August 2024. No obligations applied immediately — this date simply started the phased implementation clock.

Phase 2: Prohibitions and AI Literacy (2 February 2025) ✅ ALREADY IN EFFECT

Six months after entry into force, the first substantive obligations kicked in. These are now enforceable.

Prohibited AI practices (Article 5) are banned outright:

| Prohibited Practice | Description |
|---|---|
| Social scoring | AI systems that evaluate individuals based on social behaviour or personality characteristics for detrimental treatment |
| Subliminal manipulation | AI systems deploying subliminal techniques to materially distort behaviour, causing significant harm |
| Exploitation of vulnerabilities | AI systems exploiting vulnerabilities of specific groups (age, disability, social or economic situation) |
| Real-time remote biometric identification | In publicly accessible spaces for law enforcement (with limited exceptions) |
| Emotion recognition in workplace/education | AI systems inferring emotions of employees or students (with limited exceptions) |
| Untargeted scraping for facial recognition | Creating facial recognition databases through untargeted scraping of images from internet or CCTV |
| Biometric categorisation on sensitive characteristics | Categorising individuals based on biometric data to infer race, political opinions, religious beliefs, sexual orientation |
| Predictive policing based solely on profiling | AI systems making risk assessments of individuals predicting criminal offences based solely on profiling or personality traits |

AI literacy (Article 4): All providers and deployers of AI systems must ensure their staff have a sufficient level of AI literacy. Note: the Digital Omnibus proposes shifting this responsibility to member states and the Commission instead.

Penalties for prohibited practices: Up to €35 million or 7% of global annual turnover, whichever is higher. This is the highest penalty tier in the AI Act.

Phase 3: GPAI and Governance (2 August 2025) ✅ ALREADY IN EFFECT

One year after entry into force, obligations for general-purpose AI models and the institutional framework kicked in.

GPAI model providers must now comply with:

| Obligation | Description |
|---|---|
| Technical documentation | Maintain and make available detailed technical documentation about the model |
| Training data transparency | Publish a sufficiently detailed summary of training data content |
| Copyright compliance | Implement a policy to comply with EU copyright law, particularly the text and data mining opt-out |
| Downstream provider information | Provide information and documentation to downstream providers integrating the model |

GPAI models with systemic risk face additional obligations: adversarial testing, incident monitoring and reporting to the AI Office, cybersecurity protections, and energy consumption reporting.

Governance infrastructure now operational: The EU AI Office, the AI Board (member state representatives), the Scientific Panel, and the Advisory Forum are all operational. Member states were required to designate national competent authorities (notifying authorities and market surveillance authorities) and communicate them to the Commission by this date.

Penalty regime: Administrative fines are now enforceable for most obligations — up to €35M/7% for prohibited practices, up to €15M/3% for other violations, and up to €7.5M/1% for supplying incorrect information. However, penalties specific to GPAI model providers are deferred until August 2026.

Code of Practice: The GPAI Code of Practice was published on 10 July 2025, providing guidance for GPAI providers. A second draft of the Code of Practice on marking and labelling AI-generated content was published in March 2026.

Phase 4: The Big One — High-Risk AI and Transparency (2 August 2026)

This is the date that matters most for the majority of organisations. The bulk of the AI Act’s obligations become enforceable.

High-risk AI systems (Annex III) must comply with:

| Requirement | Article | Description |
|---|---|---|
| Risk management system | Art. 9 | Continuous, iterative risk identification, assessment, and mitigation throughout the AI system lifecycle |
| Data governance | Art. 10 | Quality criteria for training, validation, and testing datasets |
| Technical documentation | Art. 11 | Comprehensive documentation per Annex IV before market placement |
| Record-keeping | Art. 12 | Automatic logging of events throughout the system’s lifetime |
| Transparency and information | Art. 13 | Instructions for use enabling deployers to interpret outputs and use the system appropriately |
| Human oversight | Art. 14 | Technical measures enabling effective human oversight of the AI system |
| Accuracy, robustness, cybersecurity | Art. 15 | Appropriate levels of accuracy, robustness, and cybersecurity throughout the lifecycle |
| Quality management system | Art. 17 | Documented QMS covering all aspects of compliance |
| Conformity assessment | Art. 43 | Assessment of compliance before market placement (self-assessment or third-party depending on system type) |
| EU database registration | Art. 49 | Registration in the EU public database before market placement |
| Post-market monitoring | Art. 72 | Monitoring system proportionate to the nature of the AI system and its risks |

High-risk use cases under Annex III include:

| Category | Examples |
|---|---|
| Biometrics | Remote biometric identification, emotion recognition, biometric categorisation |
| Critical infrastructure | Safety components in management of road traffic, water, gas, heating, electricity |
| Education | Systems determining access to education, evaluating learning outcomes, monitoring prohibited behaviour during exams |
| Employment | Recruitment, CV screening, task allocation, performance monitoring, promotion/termination decisions |
| Access to essential services | Credit scoring, creditworthiness assessment, risk assessment in life and health insurance |
| Law enforcement | Individual risk assessment, polygraphs, evidence evaluation, crime prediction |
| Migration and border control | Risk assessment, document verification, asylum application examination |
| Administration of justice | Research and interpretation assistance for courts |

Transparency obligations (Article 50) apply: AI systems interacting with people (chatbots) must disclose their artificial nature. AI-generated content (deepfakes, synthetic media) must be labelled. Emotion recognition and biometric categorisation systems must inform users.

Additional milestones on this date: Each member state must have at least one operational AI regulatory sandbox. Penalties for GPAI model providers become enforceable. Market surveillance authorities gain full investigatory and enforcement powers.

Phase 5: Product-Embedded AI and Legacy Systems (2 August 2027)

The remaining provisions apply, completing full enforcement.

Annex I systems — high-risk AI systems that are safety components of regulated products (medical devices, machinery, vehicles, toys, lifts, radio equipment, civil aviation, etc.) — must comply with all high-risk requirements.

Legacy GPAI models placed on the market before August 2025 must have taken all necessary compliance steps by this date.

Full enforcement: All remaining provisions are applicable. The AI Act is fully effective.

Phase 6: Long-Term Compliance (2030)

Public authority legacy systems: High-risk AI systems used by public authorities that were placed on the market or put into service before August 2026 must be brought into compliance by 2 August 2030.

Large-scale IT systems: AI systems that are components of large-scale IT systems in the area of freedom, security, and justice (listed in Annex X) must be brought into compliance by 31 December 2030.

The Digital Omnibus: What Might Change

In November 2025, the European Commission published the Digital Omnibus proposal, which could significantly alter the timeline for high-risk AI compliance. The proposal is currently being debated by the European Parliament and Council.

Key proposed changes:

| Current Deadline | Proposed New Deadline | What’s Affected |
|---|---|---|
| 2 August 2026 | Up to 2 December 2027 | High-risk AI systems under Annex III |
| 2 August 2027 | Up to 2 August 2028 | High-risk AI systems under Annex I (product-embedded) |
| 2 August 2026 | 2 February 2027 | AI-generated content marking (Article 50(2)) for systems already on market |

How the delay mechanism works: The high-risk obligations would not kick in until the Commission confirms that adequate compliance support tools (harmonised standards, common specifications, guidelines) are available. Once confirmed, obligations apply six months later for Annex III systems and twelve months later for Annex I AI systems. The backstop dates (December 2027 and August 2028) apply regardless.

Other proposed changes:

| Change | Impact |
|---|---|
| AI literacy obligation shifted to member states and Commission | Reduces direct burden on providers and deployers |
| Registration requirement removed for systems self-assessed as not high-risk | Reduces administrative overhead |
| SME simplifications extended to small mid-cap enterprises (under 750 employees, under €150M revenue) | Wider access to simplified compliance |
| Processing of sensitive personal data allowed for bias detection and correction | Removes a significant barrier to responsible AI development |
| Codes of practice become soft law only | Commission loses power to make them binding |
| EU-level AI regulatory sandbox established under AI Office | New centralised testing environment |

Critical caveat: The Digital Omnibus is a proposal, not law. It must pass through Parliament and Council negotiations. Compliance experts uniformly advise treating August 2026 as the binding deadline until any changes are formally adopted. Organisations banking on the delay are taking a significant compliance risk.

In February 2026, rapporteurs in the European Parliament proposed setting fixed deadlines of December 2027 (Annex III) and August 2028 (Annex I), with MEPs proposing further amendments on issues ranging from AI-generated sexual deepfakes to AI Office resourcing. The final outcome remains uncertain.

How EYREACT Can Help

With enforcement in August 2026, the compliance window is closing. EYREACT automates EU AI Act compliance from risk classification to audit-ready documentation, giving you the structured infrastructure to meet every deadline on this timeline.

Our platform tracks 400+ rules derived directly from the AI Act, manages evidence across Living Compliance Binders, and monitors and updates your compliance status in real time so you always know where you stand.

Book a demo

Timeline by Role in the AI Value Chain

Different actors face different deadlines. Here’s when key obligations become applicable by role:

| Role | Feb 2025 | Aug 2025 | Aug 2026 | Aug 2027 |
|---|---|---|---|---|
| Provider (high-risk, Annex III) | Prohibited practices; AI literacy | | Full high-risk obligations, QMS, conformity assessment, registration, post-market monitoring | |
| Provider (high-risk, Annex I) | Prohibited practices; AI literacy | | | Full high-risk obligations |
| Provider (GPAI, new models) | Prohibited practices; AI literacy | Technical documentation, transparency, copyright, downstream info | GPAI penalty enforcement | |
| Provider (GPAI, legacy models) | Prohibited practices; AI literacy | | | Full GPAI compliance |
| Deployer (high-risk) | Prohibited practices; AI literacy | | Human oversight, input data monitoring, information obligations, FRIA | |
| Importer | Prohibited practices; AI literacy | | Verify provider compliance, CE marking, documentation | Annex I product verification |
| Distributor | Prohibited practices; AI literacy | | Verify CE marking, provider/importer compliance | Annex I product verification |
| Authorised Representative | | | AR obligations under Art. 22: documentation custody, authority liaison, compliance verification | |

What You Should Be Doing Right Now to Beat the Clock

Here’s a prioritised action plan for catching up with the EU AI Act timeline:

| Priority | Action | Why Now |
|---|---|---|
| Immediate | Complete AI system inventory and risk classification | You need to know what you have before you can comply |
| Immediate | Determine your role per system (provider, deployer, both) | Obligations differ significantly by role |
| Immediate | Verify no prohibited practices are in use | These have been enforceable since February 2025 |
| High | Begin risk management documentation for high-risk systems | Article 9 requires continuous, iterative risk management — start now |
| High | Start technical documentation per Annex IV | This is extensive and cannot be done last minute |
| High | Establish quality management system | Article 17 requires documented QMS covering all compliance areas |
| High | Implement human oversight measures | Article 14 requires technical measures enabling effective oversight |
| Medium | Plan conformity assessment approach | Self-assessment vs third-party — know which applies to you |
| Medium | Prepare for EU database registration | Registration must happen before market placement |
| Medium | Establish post-market monitoring system | Article 72 requires ongoing monitoring proportionate to risk |
| Ongoing | Monitor Digital Omnibus negotiations | Timeline may shift, but don’t count on it |


Penalty Structure

| Violation | Maximum Fine | Effective From |
|---|---|---|
| Prohibited AI practices (Article 5) | €35 million or 7% of global annual turnover | February 2025 |
| Non-compliance with high-risk obligations | €15 million or 3% of global annual turnover | August 2026 |
| Supplying incorrect information to authorities | €7.5 million or 1% of global annual turnover | August 2025 |
| SME/startup penalty reductions | Lower of the two amounts (fixed vs percentage) | Same as above |
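All the penalty tiers share one computational rule: standard operators face the higher of the fixed amount and the turnover percentage, while SMEs and startups face the lower. An illustrative helper (our own sketch, not an official formula):

```python
def max_fine(fixed_eur: float, pct: float, turnover_eur: float, sme: bool = False) -> float:
    """Maximum administrative fine for a given penalty tier.
    Standard operators: the higher of the fixed amount and pct * turnover.
    SMEs/startups: the lower of the two."""
    pct_amount = turnover_eur * pct
    return min(fixed_eur, pct_amount) if sme else max(fixed_eur, pct_amount)

# Prohibited-practice tier (€35M / 7%) for a firm with €1bn global turnover:
print(max_fine(35_000_000, 0.07, 1_000_000_000))            # 70000000.0 — 7% wins
print(max_fine(35_000_000, 0.07, 1_000_000_000, sme=True))  # 35000000.0 — fixed amount is lower
```

For smaller firms the fixed amount usually dominates: at the high-risk tier (€15M / 3%), a company would need over €500M in turnover before the percentage exceeds the fixed cap.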

FAQ

When does the EU AI Act come into full effect?

The AI Act entered into force on 1 August 2024, but its obligations apply in phases. The most significant deadline for most organisations is 2 August 2026, when high-risk AI system requirements and transparency obligations become enforceable. Full enforcement, including product-embedded AI systems, is complete by 2 August 2027.

Has the August 2026 deadline been delayed?

Not yet. The European Commission’s Digital Omnibus proposal (November 2025) suggests extending high-risk deadlines to December 2027 (Annex III) and August 2028 (Annex I), conditional on the availability of harmonised standards. However, this proposal is still being negotiated and has not been adopted. Until formally passed, August 2026 remains the binding deadline.

What happens if I miss the August 2026 deadline?

Non-compliance with high-risk AI obligations can result in administrative fines of up to €15 million or 3% of global annual turnover, whichever is higher. Market surveillance authorities can also order withdrawal of AI systems from the market, mandate corrective actions, or restrict market access.

Do the rules apply to AI systems already on the market?

Partly. There is no blanket grandfather clause: prohibited practices, for instance, apply regardless of when a system was deployed. However, high-risk systems lawfully on the market before the rules apply can continue operating without new certification as long as no significant design changes occur (a point clarified in the Digital Omnibus proposal). Legacy high-risk systems used by public authorities must comply by 2 August 2030 in any case.

Does the AI Act apply outside the EU?

Yes. Like the GDPR, the AI Act has extraterritorial reach. It applies to any provider placing an AI system on the EU market or putting it into service in the EU, regardless of where the provider is established. Non-EU providers must also appoint an authorised representative in the EU for high-risk systems and GPAI models.

What is the difference between Annex I and Annex III high-risk systems?

Annex I covers AI systems that are safety components of products already regulated by EU harmonisation legislation (medical devices, machinery, vehicles, etc.). These have until August 2027. Annex III lists specific use cases classified as high-risk by their application domain (employment, credit scoring, law enforcement, etc.). These must comply by August 2026.

When do GPAI obligations apply?

GPAI model obligations have been enforceable since 2 August 2025 for new models. Legacy GPAI models (placed on the market before August 2025) have until 2 August 2027 to comply.

What is the GPAI Code of Practice?

Published on 10 July 2025, the Code of Practice provides guidance for GPAI model providers on meeting their obligations, including transparency, copyright compliance, and safety evaluations. A separate Code of Practice on marking and labelling AI-generated content is under development, with a second draft published in March 2026.

When must AI regulatory sandboxes be operational?

Each EU member state must have at least one operational AI regulatory sandbox by 2 August 2026. The Digital Omnibus also proposes giving the AI Office authority to establish an EU-level sandbox.

What is the AI Pact?

Announced in September 2024, the AI Pact is a voluntary initiative where over 100 companies pledged to work toward early AI Act compliance. Signatories committed to identifying high-risk AI systems, promoting AI literacy, and preparing governance frameworks ahead of mandatory deadlines.


This article is for informational purposes only and does not constitute legal advice. Organisations should seek qualified legal counsel for jurisdiction-specific compliance guidance.