The Four Enforcement Waves Are Real, and the Clock Is Running

The EU AI Act entered into force in August 2024. Most enterprises missed this because the first obligations to land, such as AI literacy duties for staff, were easy to dismiss. But enforcement reality hits in four waves, and the timeline is non-negotiable. If you're operating AI systems in the EU (or serving EU customers), here's what actually happens when:

Wave 1: Prohibited Uses (February 2025)
What goes live: Ban on eight categories of prohibited AI practices, including social scoring, manipulative and exploitative techniques, untargeted scraping of facial images, emotion recognition in workplaces and schools, biometric categorization inferring sensitive attributes, predictive policing based solely on profiling, and real-time remote biometric identification in public spaces (narrow law-enforcement exceptions).
Impact: Fines up to 7% of global annual turnover or EUR 35 million, whichever is higher. Non-negotiable. Any deployment of these systems is illegal.

Wave 2: High-Risk Requirements (August 2026)
What goes live: Annex III high-risk AI systems must meet 18 mandatory compliance requirements.
Impact: Operational compliance becomes mandatory: risk management, data governance, technical documentation, human oversight, transparency disclosures. Non-compliance penalties: up to EUR 15 million or 3% of global turnover.

Wave 3: Foundation Models (GPAI) (August 2025, phased; models already on the market have until August 2027)
What goes live: All GPAI providers face transparency and copyright-documentation duties; models trained above 10^25 FLOPs carry additional systemic-risk obligations, including abuse monitoring.
Impact: Transparency requirements for deployment. If you're using GPT-4 scale models, this applies to you.

Wave 4: General Compliance Culture (Ongoing)
What goes live: Low-risk systems are subject to transparency, documentation, and record-keeping obligations.
Impact: Administrative burden. Not high penalty risk, but audit-time and resource intensive.

That's 17 months from now for high-risk system compliance. For reference: 94% of financial services institutions, 87% of healthcare systems, and 72% of insurance firms in scope are under-prepared.

First Question to Answer: What Is Your Role in This?

The EU AI Act defines three roles. Your compliance obligations depend entirely on which one(s) you occupy. Many enterprises occupy multiple roles simultaneously. Know which applies to you before building your compliance roadmap.

Provider
You develop, train, or place an AI system on the EU market (including via cloud APIs). You bear the largest compliance burden: risk assessment, technical documentation, post-market monitoring, incident reporting.
Deployer
You operate an AI system in the EU for your own purposes (internal or customer-facing). Deployers must verify that providers are compliant, monitor outputs, maintain audit trails, and report incidents. You inherit provider obligations if you substantially modify the system or market it under your own name.
Distributor / Re-seller
You resell or redistribute AI systems. You must verify provider compliance, pass on provider documentation, and maintain compliance records. A smaller compliance burden than a provider's, but still material.

Most enterprises are deployers, not providers. But if you're building AI models internally, training custom models on proprietary data, or releasing AI services to other organizations, you are also a provider. The compliance requirements are additive: deployers + providers = full-stack compliance. A minimal sketch of role determination follows.
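The role definitions above reduce to a small decision table. Here is a minimal sketch, assuming simplified yes/no facts about your deployment; the flag names and role strings are illustrative, not legal terms of art.

```python
# Minimal sketch: mapping deployment facts to EU AI Act roles.
# Flags and role labels are illustrative assumptions, not legal advice.
def applicable_roles(develops_or_trains: bool,
                     places_on_eu_market: bool,
                     operates_in_eu: bool,
                     resells: bool) -> set[str]:
    roles = set()
    if develops_or_trains and places_on_eu_market:
        roles.add("provider")
    if operates_in_eu:
        roles.add("deployer")
    if resells:
        roles.add("distributor")
    return roles

# Example: an enterprise that trains a custom model, releases it to
# customers, and also runs vendor AI tools internally holds two roles.
print(applicable_roles(True, True, True, False))  # {'provider', 'deployer'}
```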

Identifying High-Risk Systems: The Annex III Categories That Trigger Mandatory Compliance

Not all AI systems are "high-risk" under the EU AI Act. Prohibited systems (eight categories) are outright banned. General-purpose systems and low-risk uses carry lighter compliance requirements. But the Annex III high-risk categories, grouped below, trigger mandatory, heavy-duty compliance. These systems require risk management systems, technical documentation, human oversight protocols, and continuous monitoring. Identifying them correctly is your first operational task.

Biometric identification and categorization
Real-time or post-hoc biometric identification, emotion recognition, age/gender/ethnicity classification
Critical infrastructure protection
AI systems that control energy, water, transport, telecommunications, waste management
Education and vocational training
AI determining student placement, educational recommendations, or access to educational content
Employment and labor relations
AI for recruitment, promotion decisions, monitoring, termination recommendations, or performance evaluation
Essential services access
AI determining access to housing, healthcare, social services, or credit
Law enforcement and border control
AI for criminal intent assessment, polygraph interpretation, border crossing decisions, travel document validity
Administration of justice and democratic processes
AI assisting in legal decisions, parole assessments, or electoral process management
Financial and credit systems
Credit scoring, loan decisions, insurance underwriting, securities trading, bank account opening
Healthcare and medical devices
AI for patient risk assessment, diagnosis support, treatment planning, or mental health evaluation (AI that is itself a regulated medical device is captured via Annex I)
Workplace monitoring and management
Continuous employee monitoring, activity tracking, performance assessment systems
Migration and asylum processing
AI determining immigration status, asylum eligibility, or deportation assessments
Environment and natural resources
AI managing water distribution, wildlife conservation decisions, or environmental compliance
Beneficiary determination
AI determining eligibility for social welfare, unemployment, disability benefits, or parental leave
Consumer protection and pricing
AI determining individual pricing, product recommendations with financial impact, or contract terms
Public safety and security
Predictive policing, crowd management, or threat assessment systems
Autonomous weapons and military systems
AI controlling weapons, targeting systems, or military decisions without human intervention. Note: systems used exclusively for military or defence purposes fall outside the Act's scope (Article 2); this category matters mainly for dual-use deployments.

Your audit task is simple: inventory all AI systems in your organization and ask, "Does it fit one of these categories?" If yes, mark it high-risk. If it's decision-supporting (suggestion only, human makes the final call) rather than decision-making, the risk tier is lower. But the burden of proof falls on you. A minimal classification sketch follows.
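As a starting point for the inventory, here is a minimal sketch of a system registry with category-based classification. The category keys and data model are assumptions for illustration; map them to your own taxonomy.

```python
# Minimal sketch of an AI system inventory with category-based classification.
# Category keys and the data model are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto

class RiskTier(Enum):
    HIGH_RISK = auto()
    LOW_RISK = auto()

HIGH_RISK_CATEGORIES = {
    "biometric", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "justice", "credit",
    "healthcare", "workplace_monitoring", "migration",
}

@dataclass
class AISystem:
    name: str
    owner: str
    category: str | None = None    # one of the high-risk areas above, if any
    decision_making: bool = True   # False = suggestion only, human decides

    def classify(self) -> RiskTier:
        # Decision-supporting systems may argue a lower tier, but the
        # burden of proof stays with you; classify conservatively.
        if self.category in HIGH_RISK_CATEGORIES:
            return RiskTier.HIGH_RISK
        return RiskTier.LOW_RISK

registry = [
    AISystem("resume-screener", owner="HR", category="employment"),
    AISystem("faq-chatbot", owner="Support", decision_making=False),
]
for system in registry:
    print(system.name, system.classify().name)
```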

Need clarity on your high-risk systems?

Our AI inventory assessment tool helps you classify systems and identify which compliance obligations apply to your specific systems. No guesswork, no vendor bias.

Start Assessment →

What High-Risk Compliance Actually Requires: The 18 Mandatory Requirements in Practice

Once you identify high-risk systems, you must implement 18 mandatory requirements. These aren't optional. They're the difference between "compliant" and "facing penalties." Here's what actually has to be done, condensed into nine operational clusters:

1. Risk Management System (Critical)
Establish documented procedures for identifying, analyzing, and mitigating risks. Must cover reasonably foreseeable misuse, errors, and adversarial inputs. Review and update at minimum yearly or when risks change. Assign clear ownership. A minimal risk-register sketch follows.
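A risk management system ultimately lives in artifacts like a risk register. Here is a minimal sketch of one entry; the fields and scoring scale are assumptions, not a regulatory schema.

```python
# Minimal sketch of a risk register entry for requirement 1.
# Field names and the 1-5 scoring scale are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RiskEntry:
    system: str
    risk: str             # e.g. "bias against a protected group"
    likelihood: int       # 1 (rare) .. 5 (frequent)
    impact: int           # 1 (minor) .. 5 (severe)
    mitigation: str
    owner: str            # clear ownership, per the requirement
    last_review: date

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

    def review_due(self) -> date:
        # "Review and update at minimum yearly or when risks change"
        return self.last_review + timedelta(days=365)

entry = RiskEntry("resume-screener", "proxy discrimination via zip code",
                  3, 4, "drop feature; quarterly bias audit",
                  "HR analytics lead", date(2025, 3, 1))
print(entry.score, entry.review_due())
```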
2. Data Governance and Quality Standards (Critical)
Document training and testing datasets. Define quality standards, filters for bias/errors, testing protocols. Maintain audit trail of dataset changes. This applies to all datasets, including third-party sources.
3. Technical Documentation (Critical)
Create and maintain comprehensive technical documentation: system architecture, training methodology, performance metrics, known limitations, testing procedures, intended use, human oversight mechanisms. Must be detailed enough for independent assessment.
4. Logging and Record-Keeping (Critical)
Maintain logs of high-risk system operation: inputs, outputs, decisions made, dates/times, and relevant metadata. Logs must be auditable and retained for six months minimum (other applicable laws may require longer). This is your evidence trail. A minimal log-record sketch follows.
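To make the evidence trail concrete, here is a minimal sketch of an append-only decision log in JSON Lines, assuming you hash inputs rather than store raw personal data; the field names are illustrative.

```python
# Minimal sketch of an auditable decision log (JSON Lines).
# Record fields are assumptions matched to the requirement above.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(path: str, system: str, inputs: dict, output: str,
                 reviewer: str | None = None) -> None:
    record = {
        "system": system,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Hash inputs so the log is verifiable without storing raw PII.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # None if no human was in the loop
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.jsonl", "credit-scorer",
             {"applicant_id": 123, "features": "..."},
             "approve", reviewer="j.doe")
```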
5. Transparency Information for Users (Critical)
Inform users when they're interacting with an AI system. Explain what the system does, its limitations, and how to escalate to human review. Provide this information before or at the moment of interaction.
6. Human Oversight and Intervention (Critical)
Design systems so humans can understand and override outputs. Train human reviewers, establish escalation procedures, and ensure humans can reverse decisions. For employment and credit decisions, this is non-negotiable. A minimal override-gate sketch follows.
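The override requirement can be enforced structurally rather than by policy alone. Here is a minimal sketch of a gate that blocks auto-finalization in consequential domains; the domain list and return values are assumptions.

```python
# Minimal sketch of a human-override gate for consequential decisions.
# Domain names and statuses are illustrative assumptions.
from dataclasses import dataclass

CONSEQUENTIAL = {"employment", "credit"}  # must allow human reversal

@dataclass
class Decision:
    domain: str
    model_output: str   # e.g. "reject"
    confidence: float

def finalize(decision: Decision, human_verdict: str | None = None) -> str:
    if decision.domain in CONSEQUENTIAL:
        if human_verdict is None:
            return "escalated"   # block auto-finalization; route to reviewer
        return human_verdict     # the human can reverse the model
    return decision.model_output

print(finalize(Decision("credit", "reject", 0.91)))             # escalated
print(finalize(Decision("credit", "reject", 0.91), "approve"))  # approve
```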
7. Accuracy and Robustness Testing (Important)
Test systems for performance, bias, and failure modes. Document testing methodology and results. Establish metrics that matter (accuracy alone isn't enough; you need fairness and robustness metrics). Test regularly post-deployment. A minimal fairness-metric sketch follows.
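As a flavor of "metrics that matter," here is a minimal sketch computing accuracy alongside one simple fairness metric (demographic parity difference) in pure Python; the toy data and any flagging threshold are illustrative.

```python
# Minimal sketch: accuracy plus a fairness metric (requirement 7).
# Toy data; in practice you would use a proper evaluation suite.
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_diff(y_pred, groups):
    """Gap in positive-outcome rates across groups (0 = parity)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1]
groups = ["a", "a", "a", "b", "b", "b"]
print(accuracy(y_true, y_pred))                 # ~0.83
print(demographic_parity_diff(y_pred, groups))  # flag if above your threshold
```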
8. Cybersecurity and Adversarial Robustness (Important)
Implement security controls to prevent manipulation and attacks. Document vulnerability assessment, penetration testing, and security monitoring. High-risk systems are targets for adversarial manipulation.
9. Incident Reporting and Management (Important)
Establish incident response procedures. Document serious incidents (harm to individuals, system failures, security breaches). Report serious incidents to EU authorities within 15 days. Keep records for investigation. A minimal incident-record sketch follows.
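Here is a minimal sketch of an incident record that computes the 15-day reporting deadline; the severity flag and fields are assumptions.

```python
# Minimal sketch of an incident record with the 15-day reporting deadline.
# Fields and the seriousness flag are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Incident:
    system: str
    description: str
    detected: date
    serious: bool  # harm to individuals, major failure, security breach

    def report_deadline(self) -> date | None:
        # Serious incidents must reach EU authorities within 15 days.
        return self.detected + timedelta(days=15) if self.serious else None

inc = Incident("credit-scorer", "systematic mis-scoring of one cohort",
               date(2026, 9, 1), serious=True)
print(inc.report_deadline())  # 2026-09-16
```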
Deep Dive Available
Enterprise AI Governance Handbook
Our comprehensive white paper covers all 18 requirements with checklists, template documentation, governance structure design, and how to integrate compliance into model development workflows. Used by 200+ enterprise compliance teams.
Read the Handbook →

The 90-Day Sprint: Your High-Risk System Compliance Timeline

Compliance doesn't happen overnight, but it needs to happen in 90 days to stay ahead of August 2026 deadlines. Here's the specific week-by-week sprint that works:

Weeks 1-2: Audit and Classify

Inventory all AI systems. Classify by risk tier. Identify high-risk systems. Document current state (what documentation exists, what testing has been done, who owns each system). Output: high-risk system registry with 2-3 page summaries for each.

Weeks 3-4: Risk Assessment Deep Dive

For each high-risk system, conduct formal risk assessment. Identify failure modes, potential misuse, accuracy gaps, bias vectors. Document controls already in place and gaps. Output: risk assessment report per system with prioritized gap list.

Weeks 5-8: Documentation and Process Build

Develop technical documentation templates and complete them for all high-risk systems. Design and document logging infrastructure. Build transparency notices. Create human oversight procedures and escalation protocols. Output: complete technical documentation, logging implementation, and SOPs for human review.

Weeks 9-10: Testing and Validation

Execute testing protocols: accuracy/robustness testing, bias assessment, adversarial testing, security assessment. Update risk assessments based on test results. Output: testing reports with metrics and remediation priorities.

Weeks 11-12: Remediation and Sign-Off

Address critical gaps identified in testing. Implement human oversight and incident reporting procedures. Conduct compliance review with legal/compliance teams. Build training for model owners and human reviewers. Output: final compliance sign-off, training completion, incident response procedures live.

This sprint assumes you have 2-3 high-risk systems and one dedicated compliance person. If you have 10+ systems, multiply timelines by 3-5x or build a compliance team.

GPAI Obligations: Do You Deploy Foundation Models?

General-purpose AI models (GPAI) are large foundation models like GPT-4, Claude, Llama, and others. All GPAI providers face baseline transparency obligations under the EU AI Act; a model whose training compute exceeds 10^25 FLOPs (floating-point operations) is additionally presumed to pose systemic risk and carries heavier duties. If you're deploying these models, you have specific obligations:

  • Transparency Register: Maintain model cards describing architecture, training data, capabilities, limitations, and intended uses. Make available to regulators upon request.
  • Copyright Documentation: If the model was trained on copyrighted content, document it. This is an emerging compliance area with ongoing legal uncertainty, but documentation is mandatory.
  • Abuse Monitoring: Implement procedures to detect and report misuse of GPAI models, including jailbreak attempts and harmful applications.
  • Downstream Risk Assessment: Assess how GPAI models are used in your high-risk systems. If a GPAI model powers a high-risk application, both GPAI obligations and high-risk obligations apply.

If you're using Claude, GPT-4, or similar models in production, you already have deployment obligations under the EU AI Act. This is not optional based on your vendor's compliance status; deployers share responsibility. A minimal sketch of the systemic-risk threshold check follows.
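Here is a minimal sketch of the compute-threshold check described above; the model names and FLOP estimates are invented for illustration.

```python
# Minimal sketch of the GPAI systemic-risk compute threshold check.
# Model names and FLOP estimates below are illustrative assumptions.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """GPAI trained above 10^25 FLOPs is presumed to pose systemic risk."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

models = {"vendor-frontier-model": 3e25, "small-internal-model": 5e22}
for name, flops in models.items():
    tier = "systemic-risk GPAI" if presumed_systemic_risk(flops) else "baseline GPAI"
    print(name, tier)
```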

Sector-Specific Practical Guidance

Financial Services: Credit Scoring, Insurance, Securities

Financial services are heavily regulated under the EU AI Act. Many financial AI systems (credit scoring, insurance underwriting, securities trading, fraud systems that act autonomously) are high-risk. Requirements:

  • Credit scoring and loan decisions: Must include human review override and transparency to applicants on decision factors.
  • Insurance underwriting: Bias testing is mandatory. Gender-based pricing is banned; test for proxy discrimination (zip codes or education level correlating with protected characteristics; see the sketch after this list).
  • Securities trading and market manipulation detection: Require robust adversarial testing and audit trails.
  • Fraud detection: Lower risk if decision-supporting (alerts human reviewers) rather than auto-blocking. If auto-blocking, high-risk requirements apply.
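For the proxy-discrimination test mentioned above, here is a minimal sketch: screen whether a permitted feature correlates strongly with a protected attribute. The toy data, encoding, and threshold are assumptions; a real audit would use proper statistical tests.

```python
# Minimal sketch of a proxy-discrimination screen: does an allowed
# feature track a protected attribute? Data and threshold are assumptions.
from statistics import mean

def correlation(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: numeric zip-code region vs. encoded protected attribute.
zip_region = [1, 1, 2, 3, 3, 3, 4, 4]
protected = [0, 0, 0, 1, 1, 1, 1, 1]

r = correlation(zip_region, protected)
if abs(r) > 0.5:  # the cutoff is a policy choice, not a legal standard
    print(f"flag zip_region as a potential proxy (r={r:.2f})")
```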

Healthcare: Medical Devices and Patient Risk Assessment

Healthcare AI systems face dual compliance: EU AI Act high-risk requirements (regulated medical devices are captured via Annex I, other health uses via Annex III) and the Medical Device Regulation (MDR, with MDCG guidance). Key overlaps:

  • Diagnostic support systems (CAD, risk prediction): High-risk. Must include training for clinicians, documented performance on diverse patient populations, and human override capability.
  • Clinical decision support: If purely informational (supports human decision), lower risk than if system auto-determines treatment pathway.
  • Mental health and psychological assessment: Specifically high-risk. Require explainability and human clinician review.
  • Patient data governance: GDPR plus EU AI Act requirements overlap. You need data processing agreements, impact assessments, and bias monitoring.

Three Actions to Start Today

Action 1: Inventory Your AI Systems. List all AI systems in your organization. Categorize by risk (prohibited, high-risk, low-risk). Assign owner to each. This takes 2-4 weeks for most enterprises and is non-negotiable as your starting point.

Action 2: Assign Compliance Ownership. Designate a compliance lead or team. Budget resources (this is not a part-time project). Define reporting structure to leadership. Build cross-functional governance with legal, product, engineering, and operations.

Action 3: Begin Technical Documentation. For your highest-risk systems, start documenting architecture, training data, testing methodology, and known limitations. Don't wait for perfect documentation; start the process and iterate. A minimal starter template is sketched below.
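To make Action 3 concrete, here is a minimal sketch of a starter documentation record per system; every field name and value is an illustrative assumption to be replaced with your own.

```python
# Minimal sketch of a starter technical-documentation record (Action 3).
# All fields and values are illustrative assumptions.
import json

doc_template = {
    "system": "resume-screener",
    "version": "1.3.0",
    "architecture": "gradient-boosted trees over structured features",
    "training_data": {
        "sources": ["internal ATS 2019-2024"],
        "known_gaps": ["non-EU roles underrepresented"],
    },
    "testing": {
        "methodology": "holdout set plus quarterly bias audit",
        "last_run": "2026-01-15",
    },
    "known_limitations": ["degrades on sparse work histories"],
    "intended_use": "shortlisting support; human recruiter decides",
    "human_oversight": "recruiter review required before any rejection",
}

# Commit one record per system and update it with every model release.
print(json.dumps(doc_template, indent=2))
```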

Build governance that actually works
Our advisors have designed AI governance programs at 200+ enterprises. Vendor-neutral frameworks, not vendor-funded compliance theater.
Start Free Assessment →
The AI Advisory Insider
Weekly intelligence on enterprise AI governance, regulatory updates, and production case studies. No vendor sponsorship.