The European Commission’s Digital Omnibus proposal landed in November 2025 with a headline that caught many compliance teams’ attention: a potential delay to EU AI Act high-risk enforcement timelines. If you’ve been banking on August 2, 2026 as your planning deadline, the Omnibus has introduced genuine uncertainty - but not in the way most summaries suggest.
Here’s what the proposal actually does, what it doesn’t do, and why your compliance program shouldn’t treat it as breathing room.
What the Digital Omnibus Actually Proposes
The Digital Omnibus on AI is a European Commission proposal that would amend the EU AI Act’s application timeline for high-risk AI systems. Rather than fixed calendar dates, the Omnibus would link enforcement to the availability of compliance support tools - primarily harmonized standards developed by CEN-CENELEC’s Joint Technical Committee 21 (JTC 21).
Under the proposal:
- High-risk AI systems in Annex III (the category covering hiring algorithms, credit scoring, biometrics, and essential services) would face a backstop enforcement date of December 2, 2027 - but only if harmonized standards aren’t confirmed available before then
- High-risk AI systems in Annex I (safety components in products already regulated under EU sectoral law) would have until August 2, 2028 under the same logic
- Rules would apply six months after the Commission confirms Annex III-relevant standards are available, or twelve months for Annex I systems
The key phrase is “if the Commission confirms sufficient compliance support measures are in place.” That confirmation could come before December 2027, which would pull the enforcement date forward accordingly. The December 2027 date is a ceiling, not a guarantee.
What’s Driving the Extension
JTC 21 is responsible for developing the harmonized standards that will give organizations a practical compliance pathway for high-risk AI requirements. The problem: full standards are not expected to be finalized before late 2026 at the earliest, and the Commission needs to formally confirm their sufficiency before the compliance clock starts.
The Omnibus acknowledges what industry has been saying for months - that building the compliance infrastructure required under the Act (quality management systems, risk registers, technical documentation, conformity assessments, EU database registrations) without clear harmonized standards to reference is genuinely difficult. Organizations can’t certify against a moving target.
This is a reasonable concern. But the Omnibus addresses it by creating a conditional extension, not by removing the obligation.
What the Omnibus Does Not Do
This is where most coverage gets imprecise. The Digital Omnibus proposal:
- Does not suspend the EU AI Act’s existing enforcement provisions
- Does not delay the prohibitions on unacceptable-risk AI (those took effect February 2025)
- Does not remove the August 2026 date as a planning reference - it remains the baseline if standards arrive on schedule
- Is not yet law - it is a Commission proposal that must pass through Council and Parliament, a process that could result in amendments
More importantly, the Omnibus does not change what organizations need to build internally. The underlying compliance requirements - classifying systems, implementing data governance, establishing human oversight mechanisms, creating risk registers - are framework-independent. They’re required whether your deadline is August 2026 or December 2027.
The ETSI Standards Dependency
A separate but related development: ETSI EN 304 223 - Europe’s new baseline for AI cybersecurity - came into force in early 2026. This standard sits alongside the Act’s requirements for high-risk systems and is worth understanding independently of the Omnibus timeline.
For organizations building AI governance programs, ETSI EN 304 223 provides concrete cybersecurity requirements that complement the Act’s broader risk management obligations. Unlike the still-in-progress JTC 21 harmonized standards, ETSI EN 304 223 is available now and maps directly to the kind of AI security controls your high-risk systems need to demonstrate.
Organizations waiting for JTC 21 standards before starting compliance work can act on ETSI EN 304 223 today.
Why August 2026 Should Remain Your Planning Assumption
Three reasons to keep your timeline unchanged:
The extension is not guaranteed. The Digital Omnibus must survive Council and Parliament, both of which may modify the proposal. Political appetite for further AI Act delays is not uniform across member states. Treating a proposal as settled law is a compliance risk in itself.
Early standards availability is possible. The six-month trigger mechanism means that if CEN-CENELEC delivers key standards in mid-2026, Annex III enforcement could begin before December 2027. You wouldn’t know until the Commission makes its determination.
Compliance infrastructure takes time regardless of deadline. The organizations that struggle with EU AI Act compliance won’t be the ones who started in August 2026. They’ll be the ones who started in September 2026. Quality management systems, risk registers, technical documentation packages, and conformity assessment processes are multi-month builds. The Omnibus doesn’t compress that timeline.
What You Should Be Doing Now
Regardless of how the Omnibus resolves, the practical steps for high-risk AI compliance are clear:
Classify your AI inventory. Determine which of your systems fall under Annex III categories: biometrics, employment and worker management, credit and insurance, critical infrastructure, education, law enforcement, migration, and administration of justice. This classification drives everything else. Systems that don’t qualify as high-risk have significantly lighter obligations.
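The classification step above is, at its core, an inventory triage exercise. A minimal sketch, assuming a simple internal tagging scheme; the category names paraphrase Annex III headings and are not legal definitions, and the system names are invented:

```python
# Paraphrased Annex III category tags - illustrative, not legal definitions
ANNEX_III_CATEGORIES = {
    "biometrics",
    "employment",              # hiring and worker management
    "credit_insurance",
    "critical_infrastructure",
    "education",
    "law_enforcement",
    "migration",
    "justice",
}

def triage(inventory: list[dict]) -> dict[str, list[str]]:
    """Split an AI system inventory into high-risk candidates and the rest."""
    result: dict[str, list[str]] = {"high_risk_candidates": [], "review_later": []}
    for system in inventory:
        bucket = ("high_risk_candidates"
                  if system["use_case"] in ANNEX_III_CATEGORIES
                  else "review_later")
        result[bucket].append(system["name"])
    return result

inventory = [
    {"name": "resume-screener", "use_case": "employment"},
    {"name": "churn-model", "use_case": "marketing"},
]
print(triage(inventory))
# {'high_risk_candidates': ['resume-screener'], 'review_later': ['churn-model']}
```

The real work is in assigning the `use_case` tags accurately - that requires legal judgment, not code - but maintaining the inventory in a structured form makes the downstream deliverables tractable.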
Build the three core compliance deliverables. The EU AI Act’s enterprise compliance model requires a control catalog (mapping your controls to Act requirements), a compliance matrix (demonstrating coverage), and an AI risk register (living documentation of identified risks and mitigations). These are the foundation for any conformity assessment.
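The three deliverables are easier to keep current when they live as structured data rather than spreadsheets. A minimal sketch of the control catalog and compliance matrix as data structures - field names, control IDs, and requirement labels here are all illustrative placeholders, not identifiers from the Act:

```python
from dataclasses import dataclass

@dataclass
class Control:
    """One entry in the control catalog, mapped to Act requirements."""
    control_id: str
    description: str
    act_requirements: list[str]  # illustrative requirement labels

@dataclass
class RiskEntry:
    """One entry in the living AI risk register."""
    risk_id: str
    description: str
    mitigations: list[str]
    status: str = "open"

controls = [
    Control("CTL-001", "Training data provenance review", ["REQ-DATA-GOV"]),
    Control("CTL-002", "Human-in-the-loop override", ["REQ-OVERSIGHT"]),
]

def coverage(controls: list[Control], requirements: list[str]) -> dict[str, bool]:
    """Compliance matrix: does each requirement have at least one mapped control?"""
    covered = {req for c in controls for req in c.act_requirements}
    return {req: req in covered for req in requirements}

print(coverage(controls, ["REQ-RISK-MGMT", "REQ-DATA-GOV", "REQ-OVERSIGHT"]))
# {'REQ-RISK-MGMT': False, 'REQ-DATA-GOV': True, 'REQ-OVERSIGHT': True}
```

A coverage gap surfaced this way (`REQ-RISK-MGMT` above) is exactly what a conformity assessment will probe, which is why the matrix belongs in version control alongside the catalog and register.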
Start technical documentation now. For each high-risk system, technical documentation must cover system architecture, design choices, training methodology, validation approach, and testing results. This is not a documentation sprint you can complete in a week.
Understand your vendor obligations. If you’re deploying third-party AI systems in high-risk categories, your compliance obligations don’t disappear because someone else built the model. Vendor risk assessment under the Act is your responsibility.
Map to existing frameworks. NIST AI RMF, ISO 42001, and ETSI EN 304 223 provide structured approaches that align with Act requirements. Organizations already working within these frameworks have a significant compliance head start.
The Bigger Picture
The Digital Omnibus reflects a genuine tension at the heart of EU AI Act implementation: regulatory ambition running ahead of the standardization infrastructure needed to operationalize it. That tension is real, and the Commission deserves credit for acknowledging it.
But the response to regulatory uncertainty isn’t inaction - it’s building the compliance foundation that will be required regardless of the final timeline. Organizations that use the Omnibus as a reason to delay are taking on legal exposure in exchange for a few months of saved effort. That’s rarely a good trade.
If you’re trying to understand your organization’s exposure under the Act - including whether your systems qualify as high-risk - the contact form is the right starting point. Our governance team works through classification reviews and compliance gap analyses regularly.