AI regulation moved from policy discussion to enforceable obligation in 2026. The EU AI Act is past its transitional period for prohibited practices and high-risk system requirements. The United States has a fragmented but expanding federal and state-level regulatory picture. The United Kingdom, Singapore, Canada, and Brazil have all advanced their AI governance frameworks materially. For multinational enterprises, the compliance picture is now genuinely complex and the cost of ignoring it is rising.

The objective of this article is not to provide legal advice but to give enterprise leaders the strategic context needed to understand what the regulatory landscape requires, which regimes are most immediately material to their operations, and how to build a compliance approach that does not paralyze innovation. Compliance and effective AI deployment are not opposites. The organizations handling this best are those that treat regulatory requirements as a forcing function for governance practices they should be building regardless.

3x to 8x
the cost of retrofitting AI compliance into existing systems versus building it in from the start, based on our work with more than 40 enterprises across regulated industries. Organizations that treat AI governance as a foundational investment rather than a late-stage compliance task are accumulating a significant structural cost advantage.

The EU AI Act: What Is Now Enforceable

The EU AI Act has moved past transition periods for its most critical provisions. Enterprises with operations or customers in the European Union need to understand their obligations with specificity, not broad awareness.

The prohibited practices provisions have applied since February 2, 2025 (the Act itself entered into force in August 2024). These include social scoring systems by public authorities, real-time remote biometric identification in public spaces (with narrow law enforcement exceptions), AI that exploits psychological vulnerabilities or uses subliminal techniques, and biometric categorisation that infers sensitive characteristics. For most commercial enterprises, prohibited practice obligations are met by confirming that no deployed or planned systems fall into these categories.

The high-risk system requirements are the primary compliance burden for commercial enterprises. Systems used in employment decisions (hiring, promotion, performance monitoring), credit scoring, insurance risk assessment, access to essential services, education, and several other categories are classified as high-risk under Annex III. High-risk systems require conformity assessment, technical documentation, data governance requirements, human oversight design, accuracy and robustness standards, and registration in the EU AI database. Most enterprises in financial services, healthcare, and HR functions have at least some systems in scope.
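For teams triaging a portfolio of systems, that first-pass scoping step can be expressed as a simple check. The sketch below is illustrative only: the category list is an abbreviated paraphrase of Annex III use-case areas, the field names are hypothetical, and genuine scoping requires case-by-case legal analysis of each system's intended purpose.

```python
# Illustrative first-pass triage against (an abbreviated paraphrase of)
# EU AI Act Annex III high-risk categories. Not a legal determination.

# Abbreviated subset of Annex III high-risk use-case areas
ANNEX_III_AREAS = {
    "employment",         # hiring, promotion, performance monitoring
    "credit_scoring",     # creditworthiness assessment
    "insurance_risk",     # life and health insurance risk assessment
    "essential_services", # access to essential public/private services
    "education",          # admission, assessment, proctoring
}

def triage(system: dict) -> str:
    """Return a first-pass risk flag for an inventoried AI system."""
    if system.get("use_case_area") in ANNEX_III_AREAS:
        return "potential-high-risk: conformity assessment scoping required"
    return "not flagged: document rationale and re-review on change"

# Hypothetical inventory entries
resume_screener = {"name": "resume-ranker", "use_case_area": "employment"}
chat_helper = {"name": "internal-docs-bot", "use_case_area": "productivity"}

print(triage(resume_screener))
print(triage(chat_helper))
```

The value of even a crude check like this is that it forces every system through the same gate and leaves a recorded rationale, which is exactly the evidence regulators expect to see.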

General Purpose AI (GPAI) model providers face separate obligations from August 2025 onwards, with systemic-risk GPAI models (those trained using more than 10^25 floating-point operations, or FLOPs, of cumulative compute) facing enhanced requirements including adversarial testing and incident reporting. This primarily affects foundation model providers rather than enterprise deployers, though enterprises that fine-tune GPAI models on their own data may have obligations that require analysis.
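To make the 10^25 FLOP systemic-risk threshold concrete, a back-of-envelope estimate can use the commonly cited approximation of roughly 6 × parameters × training tokens for transformer training compute. The model figures below are hypothetical; a real determination rests on the provider's actual measured training compute.

```python
# Back-of-envelope comparison against the EU AI Act's 1e25 FLOP
# systemic-risk threshold, using the common ~6 * N * D approximation
# for transformer training compute. All figures are hypothetical.
THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

# Hypothetical 70B-parameter model trained on 15T tokens
compute = training_flops(n_params=70e9, n_tokens=15e12)
print(f"{compute:.2e} FLOPs; above threshold: {compute > THRESHOLD_FLOPS}")
```

Under this approximation, a hypothetical 70B-parameter model trained on 15T tokens lands just below the threshold, which is why the systemic-risk tier currently catches only the largest frontier training runs.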

Concerned about EU AI Act compliance?
Our AI governance team has run EU AI Act compliance readiness assessments for over 30 enterprises across financial services, healthcare, and retail. Independent analysis with no vendor relationships to protect.
Explore AI Governance →

The Global Regulatory Landscape by Region

European Union
Actively Enforced
EU AI Act is the most comprehensive binding AI regulation globally. Prohibited practices fully in force. High-risk system obligations applying to Annex III use cases with conformity assessment, documentation, and registration requirements. GPAI model obligations active. Penalties up to 35 million EUR or 7 percent of global annual turnover, whichever is higher. National competent authority oversight frameworks being established in major EU member states. Enterprises with any EU nexus (users, operations, or data subjects) should treat this as their primary compliance framework.
United States
Sector-Specific + State Level
Fragmented federal picture with sector-specific guidance rather than omnibus AI legislation. Financial services under existing model risk management expectations (SR 11-7) plus emerging OCC and Fed guidance on AI in banking. Healthcare under HIPAA obligations for AI systems processing protected health information, plus FDA oversight for software as a medical device. Equal employment opportunity obligations applied to AI hiring tools. State-level: Colorado AI Act (consumer protections for consequential decisions), California CPRA AI-related provisions, Illinois AEIA, and similar. Enterprise compliance requires mapping business functions to applicable sectoral frameworks.
United Kingdom
Pro-Innovation Framework Active
Context-based approach through existing regulators rather than a single AI Act. The AI Safety Institute focuses on frontier model risk. The ICO has issued AI and data protection guidance with enforcement implications under UK GDPR. The FCA and PRA have issued AI model risk management guidance for financial services. The UK approach is deliberately less prescriptive than the EU AI Act and is often cited by enterprises as more innovation-friendly, though compliance requirements under existing sector regulators are real and enforced.
China
Multiple Regulations Active
Comprehensive and actively enforced framework including the Interim Measures for the Management of Generative AI Services (effective August 2023), the Deep Synthesis Provisions governing synthetic media, and the broader personal information protection framework. Enterprises deploying AI in China face content governance requirements, security assessments for generative AI services, and requirements for labelling of AI-generated content. The regulatory posture is distinct from Western approaches and requires separate compliance architecture for China-facing operations.
Singapore
Framework Established
Model AI Governance Framework (second edition), the AI Verify testing toolkit, and the Project Moonshot safety initiative provide a well-developed voluntary framework that many enterprises treat as best-practice guidance. Financial services supervision by MAS includes AI model risk expectations aligned with global standards. Singapore's approach has been influential in shaping ASEAN region thinking and is increasingly referenced in enterprise AI governance standards in the region.

The Enterprise EU AI Act Compliance Timeline

For enterprises operating in the EU or with EU-exposed operations, the compliance timeline has specific dates that should be driving current program activity.

February 2, 2025

Prohibited Practices — In Force

All prohibited AI practices (social scoring, real-time biometric ID, subliminal manipulation, vulnerability exploitation, certain biometric categorisation) are illegal in the EU. Any systems potentially in scope should have been assessed and confirmed out of scope or discontinued.

August 2, 2025

GPAI and Governance Obligations — In Force

General Purpose AI model obligations active. AI literacy obligations, applicable since February 2, 2025, require that staff deploying or using AI systems have an appropriate level of understanding. Governance framework requirements for high-risk system deployers are phasing in. Enterprise AI governance programs should be operational, not in planning.

August 2, 2026

High-Risk System Full Compliance Required

Full conformity assessment obligations for Annex III high-risk systems. Technical documentation, human oversight requirements, accuracy standards, data governance, EU database registration. This is the primary near-term compliance deadline for enterprises with AI systems in employment, credit, insurance, and other regulated use cases. Organizations that have not started their conformity assessment programs for in-scope systems are already behind schedule.

2026–2027

National Authority Enforcement Scaling

EU member state national competent authorities are establishing enforcement capacity throughout 2026 and 2027. Early enforcement is expected to focus on egregious prohibited practices violations and high-profile high-risk system non-compliance. Enterprises with strong documentation and demonstrable governance processes are better positioned for regulatory scrutiny than those that rely on good intent without evidence.

Building Compliance Without Paralyzing Innovation

The enterprises that handle AI regulation best are those that treat compliance as a governance capability rather than a legal project. The distinction is important. A legal project has a completion date and an external objective. A governance capability is an ongoing organizational function that allows the enterprise to deploy AI efficiently while managing risk. Our AI governance advisory practice helps enterprises build the second kind.

The foundational governance practices that the EU AI Act and other frameworks require are largely practices that good AI governance demands regardless of regulation. Maintaining an inventory of AI systems in use, classifying them by risk, documenting their data inputs and intended use, maintaining oversight mechanisms for consequential decisions, and having an incident response plan for when models misbehave: all of these are practices that improve AI program quality independent of their compliance value. Regulations have created the external pressure to formalize what should already exist.
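As a concrete sketch of what a machine-readable version of that AI system inventory might look like, the record below captures risk tier, data inputs, intended use, and an oversight owner at registration time. The field names and values are illustrative, not drawn from any regulation.

```python
# Illustrative AI system inventory record. Field names are hypothetical;
# the point is that governance metadata is captured at registration time,
# not reconstructed later for a regulator.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    risk_tier: str                  # e.g. "high", "limited", "minimal"
    data_inputs: list[str]
    human_oversight_owner: str
    registered_on: date = field(default_factory=date.today)
    incident_contact: str = "ai-governance@example.com"  # hypothetical

registry: list[AISystemRecord] = []

def register(record: AISystemRecord) -> None:
    """Add a system to the enterprise AI inventory."""
    registry.append(record)

register(AISystemRecord(
    name="credit-limit-model",
    intended_use="recommend consumer credit limits",
    risk_tier="high",
    data_inputs=["bureau data", "transaction history"],
    human_oversight_owner="Head of Credit Risk",
))

# Risk classification queries fall out of the inventory for free
high_risk = [r.name for r in registry if r.risk_tier == "high"]
print(high_risk)
```

Once an inventory like this exists, risk classification, oversight assignment, and incident routing become queries rather than projects.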

The compliance overhead that creates the most friction in practice comes from organizations that are trying to retrofit documentation onto systems that were deployed without it. The Enterprise AI Governance Handbook documents the 18 documentation categories that high-risk systems require and the governance operating model that makes maintaining them sustainable. Building that documentation as part of the deployment process, not after the fact, is the practice that separates organizations with manageable compliance burdens from those facing expensive retroactive documentation efforts.
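One way to operationalize documentation-at-deployment is a release gate that refuses to promote a system while required artifacts are missing. The sketch below is hypothetical: the artifact names are illustrative shorthand, not the Act's actual documentation categories.

```python
# Hypothetical "documentation as a deployment gate": promotion is blocked
# until the required documentation artifacts exist. Artifact names are
# illustrative, not the EU AI Act's actual documentation categories.
REQUIRED_DOCS = {
    "technical_documentation",
    "data_governance_summary",
    "human_oversight_design",
    "accuracy_test_report",
}

def release_gate(provided_docs: set[str]) -> tuple[bool, set[str]]:
    """Return (may_release, missing_artifacts) for a candidate deployment."""
    missing = REQUIRED_DOCS - provided_docs
    return (not missing, missing)

ok, missing = release_gate({"technical_documentation", "accuracy_test_report"})
print(ok, sorted(missing))
```

A gate like this is the mechanism that turns documentation from a retroactive project into a routine cost of each deployment.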

Regulatory compliance is not the reason to build good AI governance. Responsible deployment, stakeholder trust, and predictable AI behavior are the reasons. Regulation just makes it non-negotiable. Treat compliance as a floor, not a ceiling.
Free White Paper
Enterprise AI Governance Handbook: EU AI Act, NIST AI RMF, and ISO 42001 Aligned
56 pages covering the four-tier risk classification framework, model lifecycle governance aligned to SR 11-7, EU AI Act compliance roadmap including all 18 documentation categories for high-risk systems, and the governance operating model that makes this sustainable.
Download Free →

What Is Coming Next

The regulatory trajectory is toward more specificity, more enforcement, and broader geographic coverage. The pattern established by GDPR is instructive: initial uncertainty about enforcement followed by significant fines that established compliance as genuinely material financial risk. AI regulation is following the same trajectory with compressed timelines.

In the near term, expect national competent authority enforcement actions under the EU AI Act to begin establishing precedent in 2026 and 2027, particularly in financial services and employment AI use cases, which have the clearest regulatory exposure and the most visible high-risk system deployments. Expect US state-level legislation to proliferate, creating a patchwork of requirements that enterprises operating across state lines will need to track. Expect sector-specific guidance from financial regulators in the US, UK, and EU to become increasingly specific about AI model risk management requirements.

The enterprises best positioned for this trajectory are those building governance capabilities that can flex to meet evolving requirements rather than those targeting the minimum viable compliance posture for current requirements. The gap between the two approaches will widen materially over the next 24 months. Our approach to AI governance that does not kill innovation outlines how leading enterprises are building this flexibility into their governance architecture from the start.

Assess Your AI Governance and Compliance Readiness
Independent review of your AI governance posture against EU AI Act, NIST AI RMF, and sector-specific requirements. No vendor relationships. No agenda beyond yours.
Free Assessment →
The AI Advisory Insider
Weekly intelligence on enterprise AI including regulatory developments, enforcement actions, and practical compliance guidance.