Most enterprises discover their explainability problem when a regulator asks a question they cannot answer. The question is rarely abstract. It is specific: why did this model recommend denying this loan application, flagging this claim for fraud review, or declining this insurance policy? If your answer is "our model is accurate," you have failed the regulatory test. Accuracy is not explainability. The ability to explain a specific decision to a specific person affected by it is a distinct technical and governance requirement that most enterprise AI programs address far too late.

Explainability is not a feature you add to an existing model. It is an architectural choice made during model design that determines which techniques are available to you, what your operational overhead will be in production, and whether your system can comply with regulations that require decision-level explanation. Organizations that build first and add explainability later typically discover that their chosen model architecture is fundamentally incompatible with regulatory explanation requirements, requiring a complete rebuild.

Three Levels of Explainability: What Regulators Actually Require

Explainability requirements exist at three distinct levels, and conflating them leads to technical solutions that satisfy none of them. Understanding which level a regulation or business requirement specifies is the first step in designing an appropriate architecture.

Level 1. Global Explainability: How the Model Works in General
A description of which features generally drive model decisions across the training dataset. This level satisfies model validation requirements (SR 11-7), model documentation requirements, and internal model risk oversight. It does not satisfy adverse action notice requirements or individual rights requests under GDPR or the EU AI Act. Global explainability is necessary but not sufficient for most regulated use cases.
Techniques: Feature importance (global SHAP values), partial dependence plots, permutation importance, model cards

Level 2. Local Explainability: Why This Specific Decision Was Made
A description of the specific factors that drove a particular model output for a specific input instance. This level satisfies adverse action notice requirements (FCRA, ECOA), individual explanation rights under GDPR Article 22 and the EU AI Act, and the explanation requirements for high-risk AI systems. Local explainability must be generated at decision time and stored for the retention period required by applicable regulation.
Techniques: SHAP values (local), LIME, counterfactual explanations, individual prediction contribution breakdowns

Level 3. Contrastive Explainability: What Would Have Changed the Decision
A description of the minimum changes to input features that would have produced a different outcome. This is the most actionable form of explanation for the affected individual and is increasingly required for adverse action communications that must be "meaningful" rather than formulaic. Contrastive explanations answer the question "what would I need to change to get a different result" — the question actually relevant to the person affected by the decision.
Techniques: Counterfactual generation (DiCE, CFEC), actionable recourse algorithms, boundary proximity analysis
84% of financial services AI programs we assess have global explainability but lack the local explanation infrastructure required to generate individual adverse action notices without manual intervention. This creates operational bottlenecks and regulatory exposure simultaneously.
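The gap between global and local explainability is easiest to see for a model where the local explanation is exact. Below is a minimal sketch of a Level 2 (local) explanation for an illustrative linear scorecard; the feature names, weights, and threshold are invented for illustration, and for tree-based or neural models you would substitute a library such as shap, but the output shape — per-decision contributions ranked into adverse action reasons — is the same.

```python
# Minimal sketch: local (per-decision) feature contributions for a linear
# scorecard. Feature names, weights, and the threshold are illustrative.
WEIGHTS = {"credit_utilization": -2.0, "years_of_history": 0.8,
           "recent_delinquencies": -1.5, "income_to_debt": 1.2}
THRESHOLD = 0.0  # scores below this are declined

def explain_decision(applicant: dict) -> dict:
    # For a linear model, each feature's contribution is weight * value,
    # so the local explanation is exact rather than approximated.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    # Adverse action reasons: the features pushing hardest toward decline.
    reasons = sorted((c, f) for f, c in contributions.items() if c < 0)
    return {
        "decision": "approve" if score >= THRESHOLD else "decline",
        "score": round(score, 3),
        "top_adverse_reasons": [f for _, f in reasons[:2]],
    }

result = explain_decision({"credit_utilization": 0.9, "years_of_history": 2,
                           "recent_delinquencies": 1, "income_to_debt": 0.4})
```

A global explanation would summarize the weights once for the whole portfolio; the local explanation above is computed per applicant at decision time, which is what adverse action notices require.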

Regulatory Explainability Requirements by Sector

Different regulatory frameworks impose different explainability requirements with different technical implications. The table below summarizes the requirements most relevant to enterprise AI programs. Organizations operating across multiple jurisdictions face potentially inconsistent requirements that must be reconciled at the architecture level.

Financial Services — US
FCRA and ECOA Adverse Action
Adverse action notices must include the specific reasons for adverse action stated in terms understandable to the applicant. The Federal Reserve's SR 11-7 guidance requires model validation to include assessment of conceptual soundness, ongoing monitoring, and outcomes analysis. SHAP-based adverse action reason codes are now widely accepted by regulators but must be generated at the individual transaction level.
Technical requirement: Individual-level SHAP values stored per decision with 5-year retention
European Union
GDPR Article 22 and EU AI Act
GDPR Article 22 grants individuals the right not to be subject to solely automated decisions with significant effects and the right to obtain a meaningful explanation of the logic involved. The EU AI Act additionally requires high-risk AI systems to provide explanations to affected persons and to authorities. For high-risk systems the explanation must be sufficient to enable human oversight and challenge.
Technical requirement: Explainability infrastructure documented in technical file, human oversight mechanism required for high-risk decisions
Healthcare — US
FDA SaMD and Clinical AI
FDA guidance on AI and machine learning-based software as a medical device emphasizes the importance of transparent reporting of model performance and limitations. Clinical decision support tools that are not locked and that modify recommendations over time require documentation of the basis for recommendations. Clinician adoption research consistently shows that unexplained recommendations are ignored or overridden, making explainability a practical requirement independent of regulatory compliance.
Technical requirement: Recommendation rationale accessible to clinician at point of care, override logging
Insurance — Multiple
State Insurance Regulators
Multiple US states have enacted or proposed AI insurance fairness regulations requiring insurers to explain AI-driven underwriting and claims decisions. New York DFS and the NAIC model bulletin require insurers to document the reasonableness of AI factors used in underwriting and to be able to explain why specific risk factors are relevant to loss prediction. Discriminatory proxy variables in AI models create significant regulatory exposure.
Technical requirement: Feature selection justification per variable, proxy variable testing, adverse impact documentation
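Across sectors, these technical requirements reduce to a common operational pattern: generate the explanation when the decision is made and store it with the decision, versioned and tamper-evident. A minimal sketch, assuming illustrative field names and a five-year retention default in line with the financial services figure above:

```python
import hashlib
import json
from datetime import datetime, timezone

def build_decision_record(model_version, features, output, explanation,
                          retention_years=5):
    # Everything needed to reproduce and defend the decision later:
    # model version, input features, output, and the explanation as
    # generated at decision time. Field names are illustrative.
    record = {
        "model_version": model_version,
        "features": features,
        "output": output,
        "explanation": explanation,
        "decided_at": datetime.now(timezone.utc).isoformat(),
        "retention_years": retention_years,
    }
    # A content hash makes after-the-fact tampering detectable.
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    return record

rec = build_decision_record("credit-risk-2.4.1",
                            {"credit_utilization": 0.9},
                            "decline",
                            {"top_reasons": ["credit_utilization"]})
```

In production this record would be written to append-only storage; the sketch shows only the shape of what must be captured per decision.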

The Architecture Decision: Inherently Interpretable vs Post-Hoc Explanation

The most consequential explainability decision is made before any model code is written: do you choose an inherently interpretable model architecture, or do you build a complex model and apply post-hoc explanation methods? This is not a purely technical choice. It has operational, regulatory, and accuracy implications that must be understood before commitment.

Approach: Logistic Regression with Constraints
Explanation mechanism: Inherently interpretable; the coefficients are the explanation.
Best for: Binary classification, adverse action, and regulated credit decisions where full transparency is required.
Limitations: Lower accuracy on complex non-linear relationships; requires significant feature engineering to compete with tree-based models.
Regulatory acceptance: HIGH — widely accepted, audit-ready.

Approach: Gradient Boosted Trees with Monotone Constraints
Explanation mechanism: Post-hoc explanation via SHAP; monotone constraints enforce directional consistency.
Best for: Credit risk, fraud detection, and churn prediction where accuracy matters and SHAP explanations are accepted.
Limitations: SHAP explanations approximate feature contributions and may be inconsistent for individual predictions near decision boundaries.
Regulatory acceptance: HIGH — widely accepted in US financial services with SR 11-7 validation.

Approach: Deep Neural Networks with SHAP or LIME
Explanation mechanism: Post-hoc approximation; the explanation is separate from the model.
Best for: Image classification, NLP tasks, and complex pattern recognition where no other approach achieves the required accuracy.
Limitations: Post-hoc methods approximate explanations and may be unfaithful to the model's actual decision logic; LIME explanations can be unstable. Does not satisfy regulatory requirements that the explanation reflect the model's actual reasoning.
Regulatory acceptance: VARIES — increasing regulatory scrutiny in high-risk use cases.

Approach: Counterfactual Explanation Systems
Explanation mechanism: Generates "nearest possible different outcome" explanations independently of model type.
Best for: Cases where actionable recourse is required: what could the applicant change to get a different result?
Limitations: Computational overhead at scale; generated counterfactuals must be actionable for the individual, not just mathematically proximate.
Regulatory acceptance: EMERGING — increasingly required for GDPR Article 22 compliance.
The model with the best accuracy on your test set is not necessarily the right model for your use case. If you cannot explain its decisions to a regulator, a customer, or a jury, accuracy is irrelevant. Architecture choice must include explainability requirements from the beginning.

The Implementation Checklist for Enterprise Explainability

Based on our work with regulated enterprise AI programs across financial services, healthcare, and insurance, these are the implementation requirements that are most commonly missed. Each one represents a gap we have found in enterprise AI programs that believed they had addressed explainability.

  • Explanation generation at inference time, not batch: Explanations must be generated when the decision is made and stored alongside the decision record. Batch-generating explanations retrospectively is operationally fragile and may not reflect the actual model state at the time of the decision.
  • Human-readable translation layer: Raw SHAP values are not adequate for adverse action notices or individual rights responses. A translation layer that converts feature contributions into plain language with regulatory-compliant framing is required for customer-facing explanation systems.
  • Explanation consistency monitoring: Monitor whether explanation characteristics drift over time independently of model performance. If the distribution of top adverse action reason codes shifts significantly, this is a signal that the model's decision logic has changed, potentially from data drift or from upstream feature pipeline changes.
  • Override documentation: When a human reviewer overrides an AI recommendation, document both the original AI output with its explanation and the human's override rationale. This creates the accountability trail required for model validation and regulatory audit.
  • Explanation audit trail: Maintain an immutable record of every explanation generated, including the model version, the input features, the output, and the explanation components. Retention periods vary by regulation but financial services typically require five years minimum.
  • Proxy variable testing: Run regular tests for protected characteristic proxy effects in the features that drive explanations. A feature that correlates with a protected characteristic and appears in a majority of adverse action explanations is a regulatory red flag even if it is facially neutral.
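The explanation consistency monitoring item above can be sketched as a population stability index (PSI) over the distribution of top adverse action reason codes between a baseline window and the current window. The 0.25 alert threshold is a common industry rule of thumb, not a regulatory figure, and the reason codes are illustrative.

```python
import math
from collections import Counter

def reason_code_psi(baseline_codes, current_codes, floor=1e-4):
    # Population stability index over the distribution of top adverse
    # action reason codes. Rule of thumb (illustrative): PSI > 0.25
    # signals a material shift in the model's decision logic worth
    # investigating, e.g. data drift or a feature pipeline change.
    codes = set(baseline_codes) | set(current_codes)
    b, c = Counter(baseline_codes), Counter(current_codes)
    psi = 0.0
    for code in codes:
        p = max(b[code] / len(baseline_codes), floor)
        q = max(c[code] / len(current_codes), floor)
        psi += (q - p) * math.log(q / p)
    return psi

baseline = ["utilization"] * 70 + ["delinquency"] * 30
shifted  = ["utilization"] * 30 + ["delinquency"] * 70
```

The point of monitoring reason codes rather than only accuracy is that decision logic can drift materially while headline performance metrics stay flat.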
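Proxy variable testing can start with something as simple as correlating each candidate feature against a protected attribute on a labeled audit sample. The 0.3 flag threshold, feature names, and data below are invented for illustration; a real program would also test non-linear association and adverse impact ratios.

```python
import math

def pearson(xs, ys):
    # Plain Pearson correlation between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_proxy_features(features, protected, threshold=0.3):
    # Flag any feature whose correlation with the protected attribute
    # exceeds the (illustrative) threshold, even if facially neutral.
    return [name for name, values in features.items()
            if abs(pearson(values, protected)) > threshold]

protected = [0, 0, 0, 0, 1, 1, 1, 1]  # illustrative audit-sample labels
features = {
    "zip_density": [1, 2, 1, 2, 8, 9, 8, 9],  # tracks the protected group
    "account_age": [5, 2, 7, 3, 6, 2, 7, 4],  # unrelated
}
flags = flag_proxy_features(features, protected)
```

A flagged feature that also dominates adverse action explanations is exactly the combination the regulatory guidance above treats as a red flag.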

Our 56-page Enterprise AI Governance Handbook covers the complete model governance framework for regulated industries, including explainability architecture, model lifecycle governance, and regulatory alignment. You can also explore how explainability integrates into our AI governance advisory service and how it connects to the broader AI risk management framework that high-performing enterprises use.
