Why SR 11-7 Was Not Built for AI
The Federal Reserve's SR 11-7 guidance on model risk management, issued in 2011, established the framework that governs how banks identify, validate, and control model risk. For a decade, it worked reasonably well for the statistical models that dominated financial services: credit scorecards, regression-based pricing models, CCAR stress testing models, and similar systems with interpretable outputs and well-understood failure modes.
AI undermines every assumption SR 11-7 makes. Statistical models have closed-form mathematics that can be examined; neural networks do not. Statistical models behave stably on similar inputs; large language models do not. Statistical models have well-understood out-of-sample performance bounds; gradient boosting models trained on regime-specific historical data can fail dramatically in novel regimes, in ways classical model validation frameworks are not designed to detect.
The result is a regulatory gap. Most financial institutions are applying SR 11-7 to AI models with modifications that are insufficient for the actual risk profile of these systems. Regulators have begun to notice. OCC, Federal Reserve, and FDIC examiners are asking more sophisticated questions about AI model governance. The institutions best positioned are those that have proactively updated their MRM frameworks rather than waiting for formal updated guidance.
The OCC's 2023 proposed guidance on model risk management explicitly acknowledged that SR 11-7's concepts apply to AI and machine learning models but noted that additional considerations are necessary given the unique characteristics of these models. Examination findings increasingly cite inadequate AI-specific model risk controls as material gaps. When formal updated guidance arrives, it is likely to codify what progressive institutions are already building.
Where AI Breaks Traditional MRM
AI systems break traditional MRM frameworks in several specific ways: scope definitions that miss AI systems, validators without ML expertise, documentation standards built for closed-form models, monitoring regimes blind to distributional shift, and change management that assumes models change only when someone deliberately changes them. Each of these gaps is examined in the sections that follow.
An AI-Extended MRM Framework
The foundational structure of SR 11-7 remains valid for AI: model development, model validation, and ongoing monitoring and control. What must change is the specific content of each stage and the standards applied within it.
Is Your MRM Framework AI-Ready?
Our financial services advisors assess your current MRM framework against AI-specific requirements and build the gap remediation roadmap before examiners arrive.
Request an MRM Assessment →
Generative AI in the MRM Framework
Large language models and other generative AI systems present the most significant MRM challenge financial institutions have faced since the proliferation of complex derivative pricing models in the 1990s. The challenge is not technical: it is definitional and governance-structural.
The definitional question is whether foundation models accessed via API constitute "models" under SR 11-7. The functional answer must be yes: if a financial institution uses a foundation model to generate content, analyze documents, or inform decisions, that use creates model risk regardless of whether the institution built the model. The institution cannot outsource model risk to the foundation model provider. This is the same logic SR 11-7 applies to vendor models.
The governance challenge is that foundation models change in ways that institutions cannot control or predict. A model accessed via an API may be retrained, fine-tuned, or replaced by the vendor without notice. Traditional model change management frameworks are not designed for this. Institutions must establish contractual requirements for change notification and technical monitoring to detect behavioral changes regardless of vendor notification.
The practical MRM requirements for foundation model use include:
- An inventory of all foundation model API integrations
- Documentation of the specific use case and decision influence of each integration
- Testing protocols for each new model version or API update
- Behavioral monitoring in production to detect unexpected output changes
- Human review gates for any generative output that directly informs a material decision
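One way to operationalize the behavioral-monitoring requirement is a fixed benchmark prompt suite whose outputs are compared against recorded baselines after every vendor model change. A minimal sketch in Python; the `call_model` callable, the token-Jaccard similarity, and the 0.6 threshold are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass


def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two model outputs."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 1.0


@dataclass
class BenchmarkCase:
    """One prompt in the fixed regression suite, with its recorded baseline."""
    prompt: str
    baseline_output: str


def detect_behavioral_drift(cases, call_model, threshold=0.6):
    """Re-run the benchmark suite against the current model version and
    return the prompts whose output has diverged from the baseline,
    regardless of whether the vendor announced a model change."""
    flagged = []
    for case in cases:
        current = call_model(case.prompt)
        if jaccard(current, case.baseline_output) < threshold:
            flagged.append(case.prompt)
    return flagged
```

In practice the similarity measure would likely be embedding-based rather than token overlap, but the governance pattern is the same: a versioned baseline, a scheduled re-run, and an escalation path for flagged prompts.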
Fair Lending Integration
AI model risk management in financial services has a fair lending dimension that does not exist for most other industries. ECOA, Fair Housing Act, and CFPB enforcement create legal obligations that make bias management a model risk management requirement, not just a governance best practice.
The fair lending integration points in MRM include:
- Adverse impact analysis in model validation for any model used in credit, housing, or employment decisions
- Pre-deployment testing against appropriate fairness criteria
- Production monitoring for disparate impact
- Adverse action notice capability for all AI-driven credit decisions
- Documentation sufficient to demonstrate to examiners that AI models in consumer credit use cases do not produce illegal discrimination
The supervisory expectation is not that AI models will never produce disparate impact. It is that institutions know whether their models produce disparate impact, can explain why, can demonstrate they have considered less discriminatory alternatives, and have implemented monitoring to detect fairness degradation.
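The "know whether your models produce disparate impact" expectation can be made concrete with a selection-rate comparison such as the EEOC four-fifths rule of thumb. A minimal sketch, assuming approval counts per protected group are available; the 0.8 screen is a heuristic trigger for investigation, not a legal threshold:

```python
def adverse_impact_ratio(outcomes):
    """outcomes maps group name -> (approved, total applicants).

    Returns each group's selection rate divided by the highest group's
    rate. Under the four-fifths rule of thumb, ratios below ~0.8 warrant
    investigation and documentation of less discriminatory alternatives.
    """
    rates = {g: approved / total for g, (approved, total) in outcomes.items()}
    benchmark = max(rates.values())  # compare against the most-selected group
    return {g: rate / benchmark for g, rate in rates.items()}
```

For example, approval counts of 80/100 for one group and 50/100 for another yield a ratio of 0.625 for the second group, which falls below the 0.8 screen and would trigger the validation and documentation steps described above.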
AI Governance Handbook
Detailed MRM framework extensions for AI, fair lending integration guidance, and documentation templates aligned with regulatory expectations.
Download Free →
What Examiners Are Looking For
Based on current examination trends and regulatory communications, these are the areas where AI MRM gaps are most commonly cited:
- Scope completeness: AI systems that influence decisions but are not classified as models in the inventory. Shadow AI built by business units without MRM review. Foundation model integrations without formal model risk assessment.
- Validator qualifications: Model validators without sufficient ML expertise to effectively challenge complex AI model architectures. This is the most common practical gap, because ML expertise is expensive and MRM functions have historically not required it.
- Documentation completeness: AI models in production without documentation sufficient for examination. Training data provenance records that cannot be produced. Absence of model cards or equivalent documentation artifacts.
- Ongoing monitoring adequacy: Monitoring frameworks designed for statistical models applied to AI without AI-specific monitoring augmentation. Absence of distributional shift detection. Absence of behavioral consistency monitoring.
- Change management gaps: AI systems that change behavior through continuous retraining or upstream data changes without triggering formal change management review.
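The distributional shift detection cited above is commonly implemented with a metric such as the population stability index (PSI), which compares the binned distribution of a model input or score between a baseline window and production. A minimal sketch; the quantile binning and the conventional 0.1/0.25 thresholds are industry rules of thumb, not regulatory requirements:

```python
import math


def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline sample (expected) and a current sample
    (actual) of a model input or score. Bin edges are taken from the
    baseline's quantiles. Common rules of thumb: < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant shift warranting
    investigation."""
    xs = sorted(expected)
    # quantile-based bin edges derived from the baseline distribution
    edges = [xs[int(i * (len(xs) - 1) / n_bins)] for i in range(1, n_bins)]

    def bin_fractions(sample):
        counts = [0] * n_bins
        for v in sample:
            counts[sum(v > e for e in edges)] += 1
        # small floor avoids log(0) / division by zero for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    e_frac, a_frac = bin_fractions(expected), bin_fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))
```

A monitoring job would compute PSI per feature and per score on a schedule, with breaches of the 0.25 level routed into the formal change management process rather than silently tolerated.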
Institutions that proactively address these gaps before examination are in a substantially better position than those that wait for findings. The cost of building AI-extended MRM capability is a fraction of the cost of regulatory remediation orders.
Building MRM Capability for AI
The practical path to AI-ready MRM requires investment in three areas: expertise, tooling, and governance.
Expertise means adding ML engineers or data scientists to validation teams, or contracting with external validators who have this expertise. The alternative, applying traditional MRM validation approaches to AI models without the technical expertise to evaluate them, is not effective challenge and will not satisfy sophisticated examiners. Institutions that cannot build this expertise internally need a credible external partner strategy.
Tooling means investment in AI-specific validation software, behavioral monitoring infrastructure, fairness evaluation pipelines, and model documentation management systems that can handle the expanded documentation requirements of AI models at scale.
Governance means updating the MRM policy, standards, and procedures to reflect AI-specific requirements. This includes updating the model definition to capture AI systems, updating validation standards to include AI-specific validation requirements, updating monitoring standards to include behavioral and distributional monitoring, and updating the model inventory to capture foundation model integrations.
For the broader AI governance framework into which MRM fits, see our enterprise AI governance framework guide. For the audit methodology that supports MRM review, see our AI audit guide. For the responsible AI program that provides the policy framework, see our responsible AI practical guide. To discuss MRM program assessment or build, visit our AI Governance service page.
Get Ahead of AI Model Risk Before Examiners Arrive
Our financial services advisors help banks, insurers, and asset managers build AI MRM capability that satisfies regulators and protects institutions from the failure modes unique to AI.