Your enterprise AI governance program almost certainly has a blind spot large enough to fail a regulatory audit. Every AI tool your organization uses, from the productivity copilot your HR team adopted last quarter to the third-party risk scoring model embedded in your credit workflow, represents a governance exposure that most enterprises have never formally assessed. In our assessments, roughly 73% of enterprises have never reviewed the data processing terms for their AI tools, and the average enterprise is running 47 unauthorized AI applications at any given time.

Third-party AI risk is distinct from traditional vendor risk in three important ways. The processing is often more opaque, the data involved is often more sensitive, and the regulatory accountability runs to your organization regardless of what the vendor's terms say. Under the EU AI Act, if you deploy a high-risk AI system, you are the deployer in the regulatory sense whether you built it or bought it. Your vendor's compliance posture is your compliance posture. Most enterprises have not internalized this yet.

Five Categories of Third-Party AI Risk

Before you can assess vendors, you need to understand what you are assessing. Third-party AI risk falls into five distinct categories, each requiring different assessment questions and mitigation approaches.

01
Data Processing and Privacy Risk
AI vendors often process data to improve their models. Your inputs (the documents you feed to an LLM, the customer transactions you run through a fraud API, the clinical notes you process through a coding tool) may be used as training data unless you have specific contractual language preventing this. Many standard terms permit model training on customer data by default.
02
Model Performance and Bias Risk
You have no visibility into how the vendor's model was trained, what data was used, or how it was validated. If the model makes biased decisions affecting your customers, the regulatory exposure is yours. You cannot audit the vendor's training data. You can only assess the model's behavior on your population and build monitoring to detect disparate outcomes.
03
Regulatory Compliance Risk
As your deployer status under the EU AI Act and financial sector regulations makes clear, using a non-compliant third-party AI system in a high-risk context creates direct regulatory liability for your organization. Vendors that are not EU AI Act compliant cannot be used in high-risk applications regardless of contract indemnification language.
04
Vendor Stability and Concentration Risk
AI vendor markets are consolidating rapidly. The startup providing your document processing AI today may be acquired, pivoted, or shut down within 18 months. Production dependencies on AI vendors without adequate data portability, API stability guarantees, or exit provisions create operational continuity risk that is not reflected in most vendor risk registers.
05
Security and Supply Chain Risk
AI systems present novel attack surfaces. Prompt injection via third-party tools, model supply chain attacks, and adversarial manipulation of production AI APIs are threat vectors that your existing vendor security assessment frameworks were not designed to address. The security questions for an AI vendor are fundamentally different from the questions for a traditional SaaS vendor.
47
The average number of unauthorized AI tools running in a large enterprise at any given time. Only a fraction have been assessed for data processing practices, model bias risk, or EU AI Act compliance implications.

The Vendor AI Risk Assessment Checklist

Every AI vendor your organization uses for non-trivial workflows should be assessed against a consistent set of questions. The depth of the assessment should be proportional to the risk level of the use case. High-risk applications require full due diligence. Internal productivity tools require a lighter-touch self-certification check. Here is the full assessment framework, prioritized by criticality.

Data Training Policy (Critical): Does the vendor's default contract permit training their models on your data? Is there an explicit opt-out provision, and have you exercised it?
Data Residency (Critical): Where is your data processed and stored? Does it cross jurisdictions that create GDPR or sector-specific regulatory issues for your use case?
EU AI Act Classification (Critical): For EU-deployed use cases, how does the vendor classify their system under the EU AI Act? Do they have conformity assessments for high-risk applications?
Model Card / System Card (High): Does the vendor publish a model card detailing training data sources, known limitations, performance benchmarks by demographic group, and known failure modes?
Fairness Testing (High): Has the vendor tested their model for demographic disparities? What protected attributes were tested, what metrics were used, and what were the results?
Explainability (High): For decision-affecting applications, can the vendor provide individual-level explanations? What is the mechanism, and what level of detail is available?
SOC 2 / ISO 27001 (High): Does the vendor have current third-party security certifications? Have you reviewed the scope and exceptions of those certifications?
Incident Response (High): Does the vendor have a documented AI incident response plan? What is the notification SLA for model performance degradation events affecting your use case?
API Stability (Medium): What is the vendor's API versioning and deprecation policy? How much notice will you receive before breaking changes, and what is their backward compatibility commitment?
Data Portability (Medium): In a vendor transition scenario, what data export capabilities exist? Can you export model outputs, inference logs, and configuration in portable formats?
Subprocessor Disclosure (Medium): Does the vendor use AI subprocessors (other AI APIs for underlying capabilities)? Have those subprocessors been assessed under the same framework?
Model Update Policy (Medium): How does the vendor notify customers of model updates? What testing is required before updates reach production, and can you pin to a specific model version?
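To make the tier-proportional assessment depth concrete, the checklist can be encoded as structured data so that the questions a vendor receives follow mechanically from the use case's risk tier. This is a minimal sketch: the item wording is abbreviated, and the mapping from risk tier to criticality floor is an assumption for illustration, not a prescribed standard.

```python
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    CRITICAL = 3
    HIGH = 2
    MEDIUM = 1

@dataclass
class ChecklistItem:
    question: str
    criticality: Criticality

# Abbreviated sample of the full checklist above.
CHECKLIST = [
    ChecklistItem("Default contract permits training on your data?", Criticality.CRITICAL),
    ChecklistItem("Where is your data processed and stored?", Criticality.CRITICAL),
    ChecklistItem("Model card published with known limitations?", Criticality.HIGH),
    ChecklistItem("API versioning and deprecation policy?", Criticality.MEDIUM),
]

def items_for_tier(tier: str) -> list[ChecklistItem]:
    """High-risk use cases get every item; lighter tiers drop lower-criticality ones."""
    floor = {"high": Criticality.MEDIUM,
             "medium": Criticality.HIGH,
             "low": Criticality.CRITICAL}[tier]
    return [item for item in CHECKLIST if item.criticality.value >= floor.value]
```

A low-risk internal productivity tool would then receive only the Critical items as a self-certification check, while a high-risk application triggers the full list.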
Is your AI governance program covering third-party risk?
Take our free AI readiness assessment. Governance is one of six dimensions we score. Get a personalized report in 5 minutes with specific recommendations for your situation.
Take Free Assessment →

Contract Terms You Must Negotiate

Many enterprises treat AI vendor contracts as standard SaaS agreements with minor amendments. That approach creates material governance gaps. AI systems require specific contractual protections that standard terms do not include. Negotiate these before signing, not after an incident forces you to read the fine print.

No-Training Clause
Explicit prohibition on using your data to train, fine-tune, or improve the vendor's models without written consent. Standard terms often permit training by default or bury opt-out requirements in configuration settings.
Model Version Pinning
The right to pin production workflows to a specific model version for a minimum period (typically 90 days), with advance notice requirements before forced updates. Critical for SR 11-7 compliant model risk management in financial services.
Performance SLAs with Remedies
Measurable performance thresholds (accuracy, latency, availability) with financial remedies for breach. Generic uptime SLAs are insufficient for AI systems where model degradation may be gradual rather than binary.
Audit Rights
The right to receive model performance reports, fairness testing results, and security assessment findings on a defined schedule. For regulated industries, extend this to the right to share vendor documentation with regulators upon request.
Incident Notification Requirements
Specific notification timelines for AI-specific incidents: model performance degradation beyond defined thresholds, security incidents affecting your data, changes to data processing practices, and regulatory investigations involving your use case.
Data Deletion and Portability
Defined data deletion timelines upon contract termination, certification of deletion, and export capabilities for all data your organization has processed through the system. Specify formats and completeness requirements.
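Operationally, the model version pinning clause means production requests name an exact dated model snapshot rather than a floating "latest" alias, so a vendor-side update cannot silently change production behavior. The payload shape and version strings below are hypothetical; real vendor APIs differ.

```python
# Hypothetical request payload illustrating model version pinning.
PINNED_REQUEST = {
    "model": "vendor-model-2024-06-13",  # exact dated snapshot, not a moving alias
    "input": "...",
}

FLOATING_ALIAS = "vendor-model-latest"   # what pinning is designed to avoid

def is_pinned(payload: dict) -> bool:
    """Reject requests that target a floating alias instead of a pinned version."""
    return not payload["model"].endswith("-latest")
```

A deployment-time check like this is a cheap way to enforce the contractual term in code as well as on paper.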
Every AI vendor contract your organization signs is also your regulatory compliance document. If the EU AI Act requires you to demonstrate your vendor's conformity assessment and the contract has no provision for accessing that documentation, you have a gap that cannot be closed retroactively.

Ongoing Monitoring and the AI Vendor Risk Register

Vendor assessment is not a one-time due diligence activity. AI systems change continuously: models are updated, processing practices evolve, and regulatory requirements shift. Your third-party AI risk program needs an ongoing monitoring cadence proportional to the risk level of each vendor relationship.

| Frequency  | High-Risk Vendors | Medium-Risk Vendors | Low-Risk Vendors |
|------------|-------------------|---------------------|------------------|
| Continuous | API latency and error rate monitoring; model output distribution monitoring for drift | Uptime and SLA compliance monitoring | None required |
| Monthly    | Fairness metric review against defined thresholds; performance benchmark comparison | Performance review against agreed metrics | Annual review only |
| Quarterly  | Full assessment review; contract terms review; regulatory change impact assessment | Abbreviated assessment refresh; contract review | Annual review only |
| Annual     | Full re-assessment; security certification review; EU AI Act compliance refresh | Full re-assessment | Full assessment |
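The "model output distribution monitoring for drift" cell can be implemented with a simple distribution-shift statistic over binned model outputs. One common choice is the population stability index (PSI); the thresholds in the docstring are widely used rules of thumb, not regulatory requirements.

```python
import math

def population_stability_index(expected: list[float], observed: list[float]) -> float:
    """PSI between two binned output distributions (bin proportions summing to 1).

    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    psi = 0.0
    for e, o in zip(expected, observed):
        e = max(e, 1e-6)  # floor empty bins to avoid log(0)
        o = max(o, 1e-6)
        psi += (o - e) * math.log(o / e)
    return psi
```

Comparing this week's output distribution against the baseline captured at vendor onboarding gives an early signal that the vendor has changed the model underneath you, even when no update notice arrived.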

The AI vendor risk register should document every third-party AI tool your organization uses, classified by risk tier, with the date of last assessment, assessment outcomes, outstanding findings, and remediation status. This register is the document your governance team, internal audit, and regulators will ask to see first. Maintain it as a living document, not a point-in-time inventory. See our related guides on shadow AI governance and enterprise AI risk management frameworks for the broader governance context.
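A minimal register entry might look like the following sketch, assuming the cadence in the table above (quarterly full reviews for high- and medium-risk vendors, annual for low-risk); field names and intervals are illustrative.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Assumed review intervals mirroring the monitoring cadence table.
REVIEW_INTERVAL_DAYS = {"high": 90, "medium": 90, "low": 365}

@dataclass
class VendorRecord:
    vendor: str
    tool: str
    risk_tier: str        # "high" | "medium" | "low"
    last_assessed: date
    open_findings: int = 0

    def review_due(self, today: date) -> bool:
        """True once the tier's review interval has elapsed since the last assessment."""
        interval = timedelta(days=REVIEW_INTERVAL_DAYS[self.risk_tier])
        return today - self.last_assessed >= interval
```

Sorting the register by overdue reviews and open findings gives governance, internal audit, and regulators an immediate answer to "what has not been looked at recently."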

Free White Paper
Enterprise AI Security Guide
The 52-page guide to AI-specific security threats, vendor supply chain risk assessment, third-party model vetting, and the four-level security maturity roadmap. Required reading at 14 of the top 20 global banks.
Download Free →

Key Takeaways for Enterprise AI Leaders

Third-party AI risk is the governance blind spot that will define the first wave of EU AI Act enforcement actions. Here is what your organization should do now:

  • Conduct an AI tool inventory. Before you can manage third-party AI risk, you need to know what tools your organization is using. A combination of network traffic analysis, procurement records, and business unit surveys typically reveals 3 to 5 times more AI tools than any central IT team believes are in use.
  • Classify every tool by risk tier. Not every AI application presents the same risk. Apply your risk classification criteria consistently and build a complete risk register with each tool's classification, owner, and assessment status.
  • Prioritize contract review for high-risk vendor relationships. The no-training clause, model version pinning, and performance SLA provisions are the highest-value improvements you can make without changing your technical implementation.
  • Build ongoing monitoring into your governance program. A vendor that passed due diligence 18 months ago may now be using different data processing practices, have updated their model significantly, or be facing regulatory investigations. Annual reviews are the minimum; quarterly reviews are appropriate for high-risk vendor relationships.
  • Understand your deployer status under the EU AI Act. If you deploy a third-party AI system in a high-risk context, you are the deployer in the regulatory sense. Vendor indemnification does not protect you from regulatory liability. Your vendors' compliance posture is your compliance posture.
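The inventory step above can be partially automated by matching egress proxy logs against known AI service endpoints. The domain list below is a tiny illustrative sample, and the log format is assumed; a real program would maintain a much larger endpoint catalogue and combine this signal with procurement records and business unit surveys.

```python
# Illustrative sample only; build the real list from an up-to-date
# catalogue of AI service endpoints.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def discover_ai_traffic(proxy_log_lines: list[str], sanctioned: list[str]) -> set[str]:
    """Return AI service domains seen in egress logs that are not on the sanctioned list."""
    seen = set()
    for line in proxy_log_lines:
        # Assumes whitespace-separated log fields with the destination host as a token.
        for token in line.split():
            if token in KNOWN_AI_DOMAINS:
                seen.add(token)
    return seen - set(sanctioned)
```

Each unsanctioned domain this surfaces becomes a candidate entry in the risk register, starting its journey through the classification and assessment workflow described above.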

Our AI governance advisory service helps enterprises build third-party AI risk programs that satisfy regulatory requirements while maintaining the operational flexibility to use the best available tools. See also our Enterprise AI Governance Handbook for the complete governance framework including third-party risk management protocols.

Assess Your AI Governance Maturity
5 minutes. 6 dimensions. Understand where your governance program has gaps before a regulatory inquiry reveals them for you.
Start Free →
The AI Advisory Insider
Weekly intelligence for enterprise AI leaders. No hype, no vendor marketing. Practical insights from senior practitioners.