February 28, 2026

The EU Digital Omnibus Explained: What It Means for EU AI Act Enforcement Dates in 2026

Last updated: February 28, 2026

The EU AI Act enforcement timeline could shift. The European Commission’s Digital Omnibus proposal could push high-risk AI obligations from August 2026 to as late as December 2027. But here’s the catch — it hasn’t passed yet, and the original deadlines still stand. This guide breaks down exactly what’s changing, what’s not, and what your company should do right now.


What Is the EU Digital Omnibus?

The Digital Omnibus is a legislative package published by the European Commission on November 19, 2025, designed to simplify and streamline the EU’s digital regulatory framework. The name comes from the Latin omnibus, meaning “for all” — signalling a holistic review across multiple digital laws simultaneously.

It covers amendments to the EU AI Act, GDPR, ePrivacy Directive, Data Act, and cybersecurity legislation (NIS2, DORA), with the stated goal of reducing administrative burdens by at least 25% for all businesses and 35% for SMEs by 2029.

For companies subject to the EU AI Act, the Digital Omnibus proposes the most significant changes to the compliance timeline since the regulation was adopted.


Why Was the Digital Omnibus Proposed?

The EU AI Act entered into force on August 1, 2024, with obligations phasing in over several years. But by late 2025, it became clear that the compliance ecosystem wasn’t ready.

Three critical problems emerged:

Harmonised standards weren’t delivered on time. The European standardisation bodies (CEN and CENELEC) missed their 2025 deadline to produce the technical standards companies need to demonstrate compliance. They’re now targeting the end of 2026.

Many Member States hadn’t designated competent authorities. National market surveillance authorities — the bodies responsible for enforcing the AI Act — were slow to be appointed, creating enforcement gaps.

Guidance was arriving too late. The Commission itself missed deadlines for publishing implementing guidance on high-risk system classification, creating uncertainty about what compliance actually requires.

The result: companies were being asked to comply with rules that hadn’t yet been fully specified, using standards that didn’t yet exist, enforced by authorities that hadn’t yet been appointed.


What Does the Digital Omnibus Change for the EU AI Act?

High-Risk AI Deadlines Are Being Pushed Back

The original AI Act timeline required high-risk AI system obligations to apply from August 2, 2026. The Digital Omnibus proposes a conditional delay — meaning enforcement is tied to the availability of compliance support tools rather than a fixed calendar date.

Here’s how the new timeline works:

| AI System Category | Original Deadline | Proposed New Deadline | Backstop (Latest Possible) |
| --- | --- | --- | --- |
| High-risk AI systems (Annex III), e.g. education, employment, credit scoring, law enforcement | August 2, 2026 | 6 months after Commission confirms standards are available | December 2, 2027 |
| High-risk AI systems (Annex I), embedded in regulated products like medical devices, vehicles, machinery | August 2, 2027 | 12 months after Commission confirms standards are available | August 2, 2028 |
| GPAI systems placed on market before Aug 2026 (documentation and governance updates) | August 2, 2026 | February 2, 2027 | February 2, 2027 |
| AI-generated content marking (Art. 50), watermarking and metadata obligations | August 2, 2026 | February 2, 2027 | February 2, 2027 |
| Legacy public sector AI systems | August 2, 2027 | August 2, 2030 | August 2, 2030 |

What Else Is Changing?

Beyond the timeline shifts, the Digital Omnibus proposes several other significant amendments:

Simplified QMS for SMEs. Access to simplified quality management system requirements (Article 17), previously available only to microenterprises, would be extended to all small and medium-sized enterprises. This directly reduces the compliance burden for startups and scaleups.

Small mid-cap relief. Companies with up to 750 employees or €150 million turnover would benefit from streamlined technical documentation requirements and more favourable fine calculations.

Reduced registration burden. AI systems that providers self-assess as not high-risk would no longer require mandatory registration in the EU high-risk AI database.

AI literacy obligation shift. The responsibility for ensuring AI literacy would move from providers and deployers to the Commission and Member States — although some training requirements for deployers of high-risk systems remain.

Special category data for bias detection. Providers would be explicitly permitted to process special category personal data (e.g., race, disability) to detect and correct bias, subject to safeguards. This addresses a long-standing tension between the AI Act’s bias requirements and GDPR restrictions.

Centralised AI Office supervision. The AI Office would gain exclusive competence for AI systems based on general-purpose AI models, as well as AI systems integrated into very large online platforms under the DSA.

EU-level regulatory sandbox. A new EU-wide sandbox would be established alongside national sandboxes, specifically for general-purpose AI model developers.

Dual classification clarification. When a high-risk AI system falls under both the AI Act and sector-specific product regulation (e.g., medical devices), providers should follow the sectoral conformity assessment procedure rather than attempting dual compliance.


What Is NOT Changing?

This is equally important. The Digital Omnibus does not touch:

Prohibited AI practices — already enforceable since February 2, 2025. Social scoring, manipulative AI, workplace and education emotion recognition bans remain fully in force. Penalties: up to €35 million or 7% of global turnover.

GPAI model obligations — applicable since August 2, 2025 for providers of general-purpose AI models (documentation, copyright compliance, training data summaries).

The AI Act’s fundamental structure — the risk-based classification system, the core requirements for high-risk systems, and the enforcement penalty framework remain intact.

Transparency obligations for chatbots and deepfakes — the requirement to inform users they’re interacting with AI remains, though the machine-readable content marking deadline gets a six-month extension.


Where Does the Digital Omnibus Stand Now? (February 2026)

The proposal is currently in the ordinary legislative procedure in the European Parliament and Council. Here’s the latest:

Parliament activity (February 2026): The co-rapporteurs published their draft report with amendments. Key proposed changes include reinstating AI literacy obligations for providers and deployers, and pushing to definitively delay high-risk obligations until December 2, 2027 at the earliest.

Political groups have tabled their own amendments — with the Socialists and Democrats (S&D) and Greens pushing to reject several controversial provisions, while the European People’s Party (EPP) is seeking to broaden changes further.

Council position: Being negotiated in parallel. Several Member States reportedly support a ban on AI-generated non-consensual intimate imagery as a new prohibited practice.

Expected timeline for adoption:

| Phase | Expected Timing |
| --- | --- |
| Committee reports and Parliament plenary vote | Q2 2026 |
| Council position agreed | Q2 2026 |
| Trilogue negotiations (Commission, Parliament, Council) | Q2–Q3 2026 |
| Final adoption (standard procedure) | Mid-2026 to Q3 2026 |
| Final adoption (urgent procedure, if applied) | Potentially Q1–Q2 2026 |
| Entry into force | 3 days after Official Journal publication |

The critical uncertainty: The legislative timeline is extremely tight. If the Digital Omnibus is not adopted before August 2, 2026, the original AI Act deadlines apply as written. Companies would need to comply with high-risk obligations using incomplete standards and limited guidance. The risk is no longer hypothetical.


FAQ: Digital Omnibus and EU AI Act Enforcement

Does the Digital Omnibus delay the EU AI Act?

Not entirely. It proposes conditional delays for specific obligations, primarily high-risk AI system requirements. Prohibited practices, GPAI obligations, and transparency requirements are not delayed. The proposal is still being negotiated and has not been adopted.

Can I stop preparing for August 2026 compliance?

No. The Digital Omnibus is a proposal, not law. Until it’s formally adopted, the original August 2, 2026 deadline for high-risk AI obligations remains legally binding. Companies that pause compliance work are gambling on legislative timing they cannot control.

What happens if the Omnibus doesn’t pass before August 2026?

High-risk AI system requirements apply as originally drafted. Companies must comply even without finalised harmonised standards. Enforcement may be uneven across Member States, but the legal obligation exists.

Does this affect my GPAI model compliance?

GPAI model obligations have been applicable since August 2, 2025. The Omnibus gives providers of GPAI systems placed on the market before August 2026 until February 2027 to update documentation and governance. But core GPAI obligations are already in force.

Is the AI Act being weakened?

The Commission has explicitly rejected characterisations of the Omnibus as deregulation, calling it a “structural recalibration.” The core framework — risk classification, prohibited practices, high-risk requirements, enforcement penalties — remains intact. However, critics, including former Parliament AI Act negotiators, argue that the delays undermine confidence in the regulation and create uncertainty.

Will SMEs still need to comply with the full AI Act?

SMEs benefit from simplified QMS requirements and streamlined technical documentation. But if an SME provides or deploys a high-risk AI system, the core obligations still apply — risk management, data governance, human oversight, transparency, and post-market monitoring.

What about enforcement — will regulators actually fine companies?

The penalty framework is unchanged: up to €35 million or 7% of global annual turnover for prohibited practice violations, and up to €15 million or 3% of global turnover for high-risk system non-compliance. Multiple investigations into prohibited practices (workplace emotion recognition, social scoring) are already underway, though no public penalties have been announced yet.
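The penalty ceilings quoted above follow a simple rule: the maximum fine is the higher of a fixed amount and a percentage of global annual turnover (the AI Act applies the lower of the two for SMEs, a nuance omitted here). The snippet below is a back-of-the-envelope sketch of that arithmetic; the function name and tiers shown are illustrative, not anything official.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of a fine: the greater of the fixed cap or pct of global turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Prohibited-practice tier: up to €35M or 7% of global annual turnover.
# For a company with €1B turnover, the 7% figure dominates:
print(max_fine(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# High-risk non-compliance tier: up to €15M or 3% of global turnover.
print(max_fine(1_000_000_000, 15_000_000, 0.03))  # 30000000.0
```

For smaller companies the fixed cap dominates: at €100M turnover, 7% is €7M, so the €35M ceiling applies instead.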


Key Definitions to Know

| Term | Definition |
| --- | --- |
| Digital Omnibus | A legislative package proposed by the European Commission on November 19, 2025, amending the EU AI Act, GDPR, and other digital laws to reduce regulatory burden and simplify compliance. |
| High-risk AI system (Annex III) | AI systems used in specific domains listed in Annex III of the AI Act, including education, employment, credit scoring, law enforcement, migration, and biometrics, subject to strict compliance requirements. |
| High-risk AI system (Annex I) | AI systems that are safety components of, or embedded in, regulated products covered by EU product safety legislation (medical devices, vehicles, machinery, toys, etc.). |
| Harmonised standards | Technical standards developed by European standardisation bodies (CEN/CENELEC) that, when followed, create a presumption of conformity with the AI Act's requirements. |
| Conformity assessment | The process by which a provider demonstrates that a high-risk AI system meets the AI Act's requirements, either through self-assessment or third-party evaluation by a notified body. |
| GPAI (General-Purpose AI) | AI models trained on broad data that can perform a wide range of tasks, such as large language models. Subject to specific obligations under the AI Act. |
| AI Office | The European Commission body responsible for overseeing AI Act implementation, with proposed expanded supervisory and enforcement powers under the Digital Omnibus. |
| Backstop date | The latest possible date by which obligations will apply, regardless of whether compliance support measures (standards, guidance) are available. |
| Regulatory sandbox | A controlled environment supervised by a competent authority where companies can develop, train, and test AI systems under regulatory guidance before market placement. |
| Stop-the-clock mechanism | The Digital Omnibus approach of linking enforcement dates to the availability of compliance infrastructure rather than fixed calendar dates. |

What Should Companies Do Right Now?

The Digital Omnibus creates uncertainty, but it doesn’t create an excuse. Here’s the practical guidance:

1. Keep preparing for August 2026. Until the Omnibus is adopted, it’s the legal deadline. If it passes and you’re already compliant — you’re ahead of the market. If it doesn’t pass and you weren’t preparing — you have a problem.

2. Prioritise prohibited practices compliance. These obligations are already enforceable. If your company uses emotion recognition in workplaces or educational settings, social scoring, or manipulative AI techniques — stop now. The fines are the highest in the AI Act.

3. Start your risk classification. Whether enforcement happens in August 2026 or December 2027, you need to know which of your AI systems are high-risk. This doesn’t change regardless of timeline shifts.

4. Build your quality management system. The Digital Omnibus simplifies QMS for SMEs, but it doesn’t eliminate it. Start with the framework now and adapt when final standards are published.

5. Monitor the legislative process closely. Track the Parliament committee votes, Council position, and trilogue negotiations. The final text will differ from the current proposal.

6. Use the extra time wisely; don't waste it. If the Omnibus passes, you gain months, not years. The backstop deadline of December 2, 2027 is not far away. Companies that use the extension to build robust compliance will be in the strongest position.


The Bottom Line

The Digital Omnibus is the EU’s acknowledgement that the AI Act’s implementation ecosystem wasn’t ready for the original timeline. The proposed delays are conditional, not guaranteed, and the legislation hasn’t been adopted yet.

For companies navigating AI Act compliance, the message is clear: prepare as if August 2026 is real, plan as if December 2027 is the likely enforcement date, and build systems that work regardless of which date applies.

The companies that treat this as a reprieve will fall behind. The companies that treat it as an opportunity to get compliance right will lead.


eyreACT helps companies automate EU AI Act compliance through living compliance binders, automated evidence collection, and continuous monitoring. Whether enforcement hits in August 2026 or December 2027 — we make sure you’re ready.