Most enterprise AI teams are treating the EU AI Act as a legal and compliance project to be handled by lawyers and risk managers. That is a mistake that is already causing production delays. The EU AI Act imposes specific technical requirements on AI systems: documentation standards, data governance requirements, human oversight mechanisms, and transparency obligations that must be built into your AI architecture, not bolted on as an afterthought by a compliance team.

This guide is written for AI practitioners and AI program leaders, not legal counsel. We cover what the Act actually requires in operational terms, how to determine which of your AI systems are affected, and the 90-day compliance sprint structure we use to bring existing production systems into compliance without shutting them down.

3x to 8x
The cost multiplier for retrofitting EU AI Act compliance into an existing production AI system versus building it in from the start. Organizations discovering now that their production models need compliance work are learning this the hard way.

Who Is Actually Affected

The EU AI Act applies to AI systems that are placed on the EU market or put into service in the EU, regardless of where the organization is headquartered. If your AI system processes data about EU individuals, makes decisions affecting EU individuals, or is operated by an entity with EU operations, you are likely within scope.

This means that US-headquartered enterprises running AI systems that affect European customers, employees, or partners are within scope. The extraterritorial reach is similar in design to GDPR, and organizations that assumed "we are a US company so this does not apply" are discovering that assumption is incorrect.

The Four-Tier Risk Classification

The EU AI Act classifies AI systems into four risk tiers, each with different compliance obligations. Correctly classifying your AI systems is the first and most important step in any EU AI Act compliance program.

Unacceptable Risk — Prohibited
Prohibited AI Systems
AI systems that pose an unacceptable risk are prohibited entirely. This includes social scoring systems by public authorities, AI that exploits psychological vulnerabilities, real-time remote biometric identification in public spaces (with narrow exceptions), and AI systems that manipulate human behavior. If you have any systems in these categories, they cannot be deployed in the EU under any circumstances.
High Risk — Full Compliance Required
High-Risk AI Systems
High-risk systems face the most stringent requirements. This tier includes AI used in credit scoring and lending decisions, hiring and employment decisions, educational access and scoring, healthcare diagnosis and treatment decisions, critical infrastructure, law enforcement, migration and border control, and administration of justice. Most enterprise AI programs in financial services, healthcare, and HR fall here. Full documentation, a conformity assessment, and human oversight are all required.
Limited Risk — Transparency Obligations
Limited-Risk AI Systems
Limited-risk systems face transparency obligations only. This tier primarily covers chatbots and other AI systems that interact with humans, AI that generates synthetic content (deepfakes, AI-generated text presented as human-authored), and emotion recognition systems. The main requirement is disclosure: users must know they are interacting with AI. Most enterprise GenAI tools fall in this tier.
Minimal Risk — No Obligation
Minimal-Risk AI Systems
Minimal-risk systems face no mandatory requirements under the EU AI Act. AI-enabled spam filters, inventory optimization, predictive maintenance (where failure does not endanger human safety), recommendation systems for non-safety-critical applications, and most internal analytics tools fall here. The vast majority of enterprise AI systems land in this tier.
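As a first-pass triage aid for your inventory (not a legal determination), the four-tier logic can be sketched as a simple rule-based helper. The use-case keys below are illustrative shorthand, not the Act's Annex wording, and the lists are deliberately incomplete:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative keyword sets; a real inventory maps each system to the
# Act's Annex categories, not to free-text domain labels like these.
PROHIBITED_USES = {"social_scoring", "behavioral_manipulation",
                   "realtime_public_biometric_id"}
HIGH_RISK_USES = {"credit_scoring", "hiring", "education_scoring",
                  "medical_diagnosis", "critical_infrastructure",
                  "law_enforcement", "border_control", "justice"}
LIMITED_RISK_USES = {"chatbot", "synthetic_content", "emotion_recognition"}

def classify(use_case: str) -> RiskTier:
    """First-pass tier triage; any HIGH or UNACCEPTABLE hit still needs legal review."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL  # default tier; most enterprise systems land here
```

Anything a helper like this flags as high-risk or prohibited should go straight to legal review; the value of the sketch is forcing every system in the inventory through the same decision path.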

What High-Risk Compliance Actually Requires

If any of your AI systems are classified as high-risk, the compliance obligations are substantial. Here is what the Act actually requires, in practical operational terms.

Risk management system: A documented, continuous risk management process for each high-risk AI system, covering known and foreseeable risks. It must be updated throughout the lifecycle; a one-time assessment is not sufficient.

Data governance: Training, validation, and test datasets must be documented for relevance, representativeness, and freedom from errors and biases. Lineage documentation is required, and the handling of protected characteristics must be documented.

Technical documentation: A minimum of 18 categories of documentation covering system purpose, development process, architecture, training data, performance metrics, risk assessment, and intended use. Must be available to market surveillance authorities on request.

Logging and audit trails: Automatic logging of system operation sufficient to trace the events leading to any high-risk output. Log retention periods are specified by sector-specific regulation or a minimum of five years.

Transparency: Instructions for use must allow deployers to understand system capabilities, limitations, and required human oversight. This is not just "user documentation"; it is specific technical and operational guidance.

Human oversight: Technical measures enabling humans to understand, monitor, and override or stop the system. These must be built into the system architecture, not just described in documentation.

Accuracy and robustness: Performance must be measured and documented across the dimensions relevant to the intended use. Accuracy, robustness to errors, and cybersecurity resilience must be appropriate for the risk level.
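To make the logging requirement concrete, here is a minimal sketch of a decision-traceability record. The field names, hashing choice, and function shape are our own illustration, not anything mandated by the Act:

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def decision_record(model_id, model_version, inputs, output, reviewer=None):
    """Build one append-only audit record sufficient to trace a high-risk decision.
    Inputs are hashed so the audit log itself need not hold personal data."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # None = fully automated; a gap for high-risk use
    }

rec = decision_record("credit-scorer", "2.4.1",
                      {"income": 52000}, {"approved": False})
```

The point is traceability, not format: given any high-risk output, you must be able to reconstruct which model version produced it, from what inputs, and whether a human was in the loop.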

The 90-Day EU AI Act Compliance Sprint

For organizations with existing production AI systems that need to be brought into EU AI Act compliance, we use a structured 90-day sprint. This is not a comfortable timeline for organizations with large portfolios, but it is achievable for individual systems and provides a replicable template for the rest of the portfolio.

DAYS 1 TO 14
Inventory and Classification
AI System Inventory and Risk Classification
Identify all AI systems in production or under development that may be within EU AI Act scope. For each system, apply the four-tier risk classification. Priority: focus effort on systems with EU data subjects or EU decision impact. For high-risk systems, complete an initial gap assessment against all seven compliance requirement categories. Outputs: classified inventory, prioritized compliance roadmap, initial gap register.
DAYS 15 TO 45
Documentation Build
Technical Documentation and Risk Assessment
For each high-risk system, build the 18-category technical documentation package. Conduct formal risk assessment covering known and foreseeable risks, documented failure modes, and risk mitigation measures. Document data governance practices for training, validation, and test sets. Address any data lineage gaps that prevent compliance documentation. The documentation build often reveals technical gaps that require parallel engineering work.
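One way to track the documentation build is a per-system gap register. The category names below paraphrase a subset of the required documentation categories for illustration; they are not the Act's full or exact list:

```python
# Illustrative subset of the technical-documentation categories; the Act
# specifies the full set, which a real register would enumerate completely.
DOC_CATEGORIES = [
    "intended_purpose", "system_architecture", "development_process",
    "training_data_description", "performance_metrics", "risk_assessment",
    "human_oversight_measures", "instructions_for_use",
]

def gap_register(completed: set) -> dict:
    """Map each documentation category to its status for one high-risk system."""
    return {c: ("complete" if c in completed else "gap") for c in DOC_CATEGORIES}

reg = gap_register({"intended_purpose", "risk_assessment"})
open_gaps = [c for c, status in reg.items() if status == "gap"]
```

Rolled up across the portfolio, a register like this is what turns "documentation build" from an open-ended writing exercise into a burn-down list for days 15 to 45.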
DAYS 46 TO 75
Technical Remediation
Human Oversight and Logging Implementation
Implement or verify human oversight mechanisms at appropriate points in the decision workflow. Deploy audit logging infrastructure meeting EU AI Act requirements (decision traceability, retention). Address any technical gaps identified in the documentation phase. Common gaps: systems that make automated high-risk decisions without a documented human review checkpoint, insufficient logging granularity, models without performance monitoring against initial metrics.
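A hypothetical sketch of the human review checkpoint described above: outputs below a confidence threshold are held in a review queue rather than auto-executed. The threshold, data shapes, and queue mechanism are all assumptions for illustration; a production system would persist the queue and record the reviewer's identity:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    outcome: str
    confidence: float
    reviewed_by: Optional[str] = None  # set when a human signs off

REVIEW_THRESHOLD = 0.90  # illustrative; set per the documented risk assessment

def gate(decision: Decision, review_queue: list) -> Optional[Decision]:
    """Hold unreviewed low-confidence outputs for human sign-off
    instead of letting them auto-execute."""
    if decision.reviewed_by is None and decision.confidence < REVIEW_THRESHOLD:
        review_queue.append(decision)  # awaits human approval or override
        return None                    # nothing executes automatically
    return decision

queue: list = []
auto = gate(Decision("approve", 0.97), queue)  # proceeds automatically
held = gate(Decision("deny", 0.61), queue)     # queued for human review
```

The architectural point is that the override path exists in code, not only in a policy document: the system physically cannot emit a low-confidence high-risk decision without a human in the loop.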
DAYS 76 TO 90
Conformity and Governance
Conformity Assessment and Ongoing Governance
Complete conformity assessment process for high-risk systems (internal assessment for most enterprise deployers; third-party assessment required for Annex I systems). Establish ongoing monitoring and lifecycle governance process. Register systems in EU AI Act database if required. Brief senior leadership and board on compliance status and ongoing obligations. Outputs: compliance documentation package, ongoing governance process, board briefing.
EU AI Act Compliance Assessment
Our AI Governance service includes EU AI Act readiness assessment, gap analysis, technical documentation support, and the 90-day remediation program. We have completed EU AI Act compliance programs at 14 enterprises in regulated industries.
View AI Governance Service →

General Purpose AI Models: What the Act Requires

A significant portion of enterprise GenAI deployments use general purpose AI models (GPAI) — the large foundation models from OpenAI, Anthropic, Google, and others. The EU AI Act introduces a separate regime for GPAI models that affects both providers and enterprise deployers.

For enterprise organizations deploying GPAI models, the key implication is that you are the deployer, and the Act assigns specific obligations to deployers that you cannot delegate entirely to your GPAI vendor. You are responsible for conducting your own risk assessment when a GPAI model is integrated into an application that falls in a high-risk category, even if the underlying model provider has fulfilled their GPAI obligations.

This means that a financial services firm using a commercial LLM for credit decision support needs to apply high-risk AI system requirements to that deployment, regardless of what the LLM provider's EU AI Act compliance posture is. The provider's compliance covers their model. Your deployment of that model in a high-risk context is your compliance obligation.

Sector-Specific Considerations

Financial services organizations face the most complex EU AI Act compliance challenge because so many of their AI systems fall squarely in the high-risk category. Credit scoring, risk assessment, loan decisions, insurance underwriting, and fraud detection systems are all explicitly high-risk. These organizations must also reconcile EU AI Act requirements with existing model risk management frameworks (SR 11-7 equivalent standards) and the forthcoming DORA AI governance requirements.

Healthcare organizations face the intersection of EU AI Act high-risk classification for clinical AI with existing EU Medical Device Regulation requirements for SaMD (Software as a Medical Device). Systems that qualify as SaMD under MDR and as high-risk AI systems under the EU AI Act face a dual compliance burden that requires coordinated regulatory strategy.

HR and employment technology organizations are discovering that systems used for CV screening, candidate ranking, and performance evaluation fall explicitly in the high-risk AI category. Many HR technology vendors are revising their product architectures in response to EU AI Act requirements, and enterprise buyers need to understand how their contracts allocate compliance obligations between vendor and deployer.

Free Research
Enterprise AI Governance Handbook — 56 Pages
The complete enterprise AI governance framework including four-tier risk classification, EU AI Act compliance roadmap with 90-day sprint, model lifecycle governance, ethics and fairness program design, and board reporting templates. Used to govern AI at regulated enterprises across financial services, healthcare, and professional services.
Download Free →
Assess Your EU AI Act Readiness
Our AI Governance service team has completed EU AI Act readiness assessments at enterprises across financial services, healthcare, and professional services. We give you a classified inventory, gap assessment, and remediation roadmap.
View AI Governance Service →
The AI Advisory Insider
Weekly intelligence on EU AI Act developments, AI governance, and regulatory compliance for enterprise AI teams. What regulators are actually looking for, reported by practitioners inside regulated organizations.