The EU AI Act’s high-risk AI system rules come with a hard deadline: August 2, 2026 is the compliance date for Annex III systems. If your organization deploys AI for hiring, credit scoring, biometric identification, critical infrastructure, law enforcement, or education, you are in scope. Non-compliance carries penalties up to €35 million or 7% of global annual turnover, whichever is higher.

This isn’t just a European issue. If your AI systems affect EU citizens or are used in EU markets, these requirements likely apply regardless of where your organization is headquartered.

Here’s what you need to know - and do - now.

The Risk-Based Approach

The EU AI Act categorizes AI systems by risk level, with requirements scaling accordingly:

Unacceptable Risk (Prohibited)

Certain AI applications are banned outright:

  • Social scoring systems by governments
  • Real-time biometric identification in public spaces (with limited exceptions)
  • AI that exploits vulnerabilities of specific groups
  • Subliminal manipulation that causes harm

If your AI falls into this category, there’s no compliance path. Don’t build it.

High Risk (Heavy Regulation)

AI systems in these categories face significant requirements:

  • Critical infrastructure (energy, transport, water)
  • Education and vocational training
  • Employment and worker management
  • Essential services (credit, insurance, emergency services)
  • Law enforcement and justice
  • Migration and border control
  • Biometric identification

High-risk systems must meet requirements for data governance, documentation, transparency, human oversight, accuracy, robustness, and cybersecurity.

Limited Risk (Transparency Obligations)

Systems like chatbots and emotion recognition tools carry transparency obligations. Users must know they’re interacting with AI.

Minimal Risk (No Specific Requirements)

Most AI applications fall here. No specific compliance requirements, though general principles apply.
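The four tiers above can be sketched as a simple lookup for an internal triage pass. This is a minimal illustration, not a legal determination - the category labels below are our own shorthand, and real classification requires reviewing Annex III and Article 5 with counsel:

```python
# Illustrative first-pass risk-tier lookup for an internal AI inventory.
# Category labels are our own shorthand, not the Act's legal definitions.

PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"hiring", "credit_scoring", "biometric_id",
             "critical_infrastructure", "law_enforcement", "education"}
LIMITED_RISK = {"chatbot", "emotion_recognition"}

def classify(use_case: str) -> str:
    """Map an internal use-case label to an EU AI Act risk tier."""
    if use_case in PROHIBITED:
        return "unacceptable"
    if use_case in HIGH_RISK:
        return "high"
    if use_case in LIMITED_RISK:
        return "limited"
    return "minimal"  # default: no specific requirements

print(classify("hiring"))       # high
print(classify("spam_filter"))  # minimal
```

Even a rough table like this forces the conversation that matters: which tier does each system land in, and who signed off on that call?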

Key Requirements for High-Risk Systems

If you’re deploying high-risk AI, here’s what compliance looks like:

Risk Management System

Implement a documented process to identify, analyze, estimate, and evaluate risks. This isn’t a one-time assessment. It’s ongoing throughout the system lifecycle.

Data Governance

Training, validation, and testing datasets must meet quality criteria. You need documented processes for data collection, preparation, and bias mitigation.
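One concrete piece of a bias-mitigation process is screening outcome rates across demographic groups. The sketch below shows a single illustrative metric (group-wise positive rate) on hypothetical data - a real bias audit involves far more than this one check:

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """Positive-outcome rate per group - one simple screening metric.
    records: iterable of (group_label, outcome) with outcome in {0, 1}."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, outcome in records:
        counts[group][0] += outcome
        counts[group][1] += 1
    return {g: p / t for g, (p, t) in counts.items()}

# Hypothetical labelled outcomes: (group, approved?)
data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rate_by_group(data)
print(rates)  # approval rate per group; a large gap warrants investigation
```

The point is that “bias mitigation” must be a documented, repeatable computation over your datasets, not a one-off judgment call.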

Technical Documentation

Comprehensive documentation that demonstrates compliance. This includes system architecture, design choices, training methodologies, and testing results.

Record Keeping

Automatic logging of system operation. Logs must be retained and accessible for compliance verification.
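In practice this means emitting a structured, append-only record for each system decision. The schema below is an assumption for illustration - the Act specifies what logging must enable, not a concrete format - and note that hashing inputs rather than storing raw personal data is one common design choice:

```python
import json
import time
import uuid

def log_inference(model_id, inputs_digest, output, operator):
    """Emit one structured log record per system decision.
    Fields are illustrative, not a prescribed schema."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "inputs_sha256": inputs_digest,  # hash of inputs, not raw personal data
        "output": output,
        "operator": operator,
    }
    return json.dumps(record, sort_keys=True)

# In production this line would be appended to tamper-evident storage.
line = log_inference("credit-scorer-v3", "ab12...", {"score": 0.81},
                     "svc-account-7")
```

Whatever schema you choose, retention periods and access controls for these logs belong in your technical documentation.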

Transparency

Clear instructions for users, including system capabilities, limitations, and known risks. Users should understand what the AI does and doesn’t do.

Human Oversight

Design systems to allow effective human oversight. This includes the ability to understand outputs, intervene when necessary, and override or reverse decisions. Agentic workflows complicate this requirement - governance frameworks need explicit agent decision logging and override capabilities at each step of automated processes.
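One way to realize per-step logging and override in an agentic workflow is a gate that records every proposed action and routes uncertain ones to a human. This is a minimal sketch under assumed names and thresholds, not a prescribed pattern:

```python
# Minimal sketch of a human-override gate in an agentic pipeline.
# The confidence threshold and field names are illustrative assumptions.

decision_log = []

def gated_step(step_name, proposed_action, confidence, reviewer=None):
    """Log every agent decision; route low-confidence actions to a human
    who can override the proposal before it takes effect."""
    needs_review = confidence < 0.9
    final = proposed_action
    if needs_review and reviewer is not None:
        final = reviewer(proposed_action)  # human may accept or override
    decision_log.append({
        "step": step_name,
        "proposed": proposed_action,
        "final": final,
        "confidence": confidence,
        "human_reviewed": needs_review,
    })
    return final

# A reviewer who rejects the agent's low-confidence proposal:
result = gated_step("approve_loan", "approve", 0.72,
                    reviewer=lambda action: "deny")
```

The log captures both what the agent proposed and what actually happened, which is exactly the audit trail oversight requirements point toward.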

Accuracy, Robustness, and Cybersecurity

Systems must achieve appropriate levels of accuracy for their intended purpose. They must be resilient to errors and attacks, with cybersecurity measures appropriate to the risk.

Timeline and Enforcement

The regulation is being phased in - and the most consequential deadline is now:

  • February 2025: Prohibitions on unacceptable risk AI took effect
  • August 2025: Requirements for general-purpose AI models (GPAI)
  • August 2, 2026: Full requirements for high-risk AI systems (Annex III) - this is the current enforcement deadline
  • August 2027: Requirements for high-risk AI systems listed in Annex I (safety components in existing regulated products)

Penalties for non-compliance reach up to 7% of global annual turnover or €35 million for the most serious violations, and up to 3% (€15M) for other infringements.

A note on the Digital Omnibus: The European Commission’s Digital Omnibus proposal (November 2025) would extend certain high-risk enforcement deadlines to December 2, 2027 - but only if linked harmonized standards aren’t available in time. This is not a blanket postponement. Organizations should treat August 2026 as the operative deadline and treat any extension as a potential reprieve, not a planning assumption.

Practical Steps to Prepare

Compliance isn’t achieved overnight. Start now:

1. Inventory and Classify Your AI Systems

Document all AI systems you develop, deploy, or use. Classify each against the Act’s risk categories - particularly Annex III’s high-risk categories. This classification review is the prerequisite for everything else.

2. Gap Analysis

For high-risk systems, compare current practices against Act requirements. Identify gaps in documentation, governance, and technical controls.

3. Governance Framework

Establish or update your AI governance framework. Define roles, responsibilities, and processes for AI development and deployment.

4. Build Your Compliance Deliverables

High-risk systems require three core deliverables before August 2026: a control catalog (mapping your controls to Act requirements), a compliance matrix (demonstrating which requirements are met), and an AI risk register (ongoing documentation of identified risks and mitigations). Technical documentation, conformity assessment records, and EU database registration complete the picture.
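Of the three deliverables, the risk register is the one that must stay alive after the deadline. A minimal sketch, with made-up field names and a hypothetical entry:

```python
import datetime

def register_risk(register, system, description, severity, mitigation):
    """Append an entry to a living AI risk register.
    Schema is illustrative; entries are updated over the system lifecycle."""
    entry = {
        "system": system,
        "description": description,
        "severity": severity,  # e.g. low / medium / high
        "mitigation": mitigation,
        "status": "open",
        "logged": datetime.date.today().isoformat(),
    }
    register.append(entry)
    return entry

register = []
register_risk(register, "resume-ranker",
              "possible gender skew in training data",
              "high", "re-balance training set; quarterly bias audit")
```

The control catalog and compliance matrix can follow the same pattern: structured records that map each Act requirement to the control satisfying it.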

5. Training

Ensure your teams understand the requirements. This includes developers, data scientists, compliance staff, and leadership.

6. Vendor Assessment

If you use third-party AI systems, assess their compliance. Your obligations don’t disappear because you’re using someone else’s technology.

The Broader Governance Context

The EU AI Act doesn’t exist in isolation. It’s part of a broader movement toward AI governance:

NIST AI Risk Management Framework provides complementary guidance for organizations regardless of EU exposure.

ISO 42001 offers a management system standard for AI that aligns with regulatory requirements.

Industry-specific regulations may add additional requirements depending on your sector.

Smart organizations are building governance frameworks that address multiple standards and regulations at once, rather than pursuing point compliance with a single regulation.

Moving Forward

Compliance with the EU AI Act isn’t optional if you’re affected. But it’s also an opportunity. Organizations that build robust governance now will be better positioned for:

  • Regulatory requirements in other jurisdictions
  • Customer and stakeholder trust
  • Reduced risk from AI failures
  • Competitive advantage in regulated markets

The investment in compliance today is an investment in sustainable AI adoption.

Need help assessing your compliance position? Our governance team can help you inventory systems, identify gaps, and build a roadmap to compliance. Use the contact form to discuss your situation.