Your enterprise just deployed a GenAI assistant to 8,000 employees. It has access to the document management system, the CRM, and internal wikis. You have strong traditional cybersecurity. What you almost certainly do not have is a single AI-specific security control.

That is the situation at 67% of enterprises actively deploying GenAI, according to our assessments across 200+ organizations. Security teams secured the perimeter. They secured the endpoints. They never modeled the threat of an AI system that can be manipulated through ordinary text inputs to exfiltrate data, bypass access controls, or take unauthorized actions.

340%
Increase in AI-targeted security incidents year over year. Most involve prompt injection, data exfiltration through LLM outputs, or shadow AI tools processing sensitive data without oversight.

GenAI security is not a subset of traditional cybersecurity. It requires a distinct threat model, distinct controls, and distinct governance. This article covers the specific threats, the control framework that addresses them, and how to sequence implementation when you cannot do everything at once.

The AI-Specific Threat Landscape

Traditional security models assume that threats come from outside, attempt to breach boundaries, and leave traces in network or access logs. GenAI breaks all three assumptions. Threats arrive through normal user inputs. They exploit the model's intended functionality. They often leave no trace in conventional logs.

Here are the six threat categories that matter most for enterprise GenAI deployments:

Highest Risk
Prompt Injection
Malicious instructions embedded in user inputs, documents, or retrieved content manipulate the model into ignoring system instructions, exfiltrating data, or taking unauthorized actions. Indirect injection via RAG-retrieved documents is particularly dangerous since it bypasses user-level access controls.
High Risk
Data Exfiltration via LLM Output
A model with broad document access can be asked to summarize, quote, or reason over sensitive content it should not surface. Without output filtering and access-level enforcement at retrieval, the model becomes a powerful data extraction tool.
High Risk
Unauthorized Tool-Calling in Agentic AI
Agentic AI systems that call APIs, write to databases, or execute code create a new attack surface. A compromised or manipulated agent can trigger financial transactions, modify records, or send communications without human authorization.
Medium Risk
Shadow AI and Unauthorized LLM Use
Employees using personal ChatGPT or other consumer LLMs with enterprise data bypass every control you have built. Our assessments find an average of 47 unauthorized AI tools in active use at enterprises with formal AI programs.
Medium Risk
Training Data Poisoning
If your organization fine-tunes models on internal data, poisoned training samples can introduce backdoors or biased behavior that persists into production. Supply chain risk extends to pre-trained base models from third-party providers.
Medium Risk
Sensitive Data in Context Windows
LLM context windows can contain PII, commercial secrets, or regulated data pulled from retrieval systems. Without strict data classification at ingestion and retention controls on conversation logs, sensitive data accumulates in accessible, unprotected stores.

The Five-Layer GenAI Security Control Framework

Effective GenAI security requires controls at five distinct layers. They are not substitutes for each other. A prompt injection defense at Layer 1 does not protect you from a misconfigured retrieval system at Layer 3. You need all five.

01
Input Validation and Prompt Defense
Deploy input screening that detects and blocks known prompt injection patterns before they reach the model. This includes injection attempt classifiers, jailbreak pattern matching, and role-play instruction detection. No classifier catches everything, so this layer reduces noise; it does not eliminate the threat. Pair with system prompt hardening that explicitly defines the model's role, permitted actions, and refusal instructions.
02
Retrieval-Level Access Controls
The most commonly skipped control in RAG deployments. User-level permissions must be enforced at the vector store query, not downstream. If a user cannot access a SharePoint folder, the RAG pipeline must not retrieve chunks from that folder when serving that user. Cross-tenant isolation in multi-tenant deployments requires namespace-level segregation at the vector database, not just application-layer filtering.
03
Output Filtering and Classification
LLM outputs must pass through classification before reaching users. At minimum: PII detection (regex plus ML classifier for unstructured PII), sensitive topic detection aligned to your data classification policy, and factual claim extraction for high-stakes use cases. In regulated environments, outputs referencing specific document sources require source attribution logging for audit purposes.
04
Tool Authorization Framework for Agentic AI
Every tool an AI agent can call must be listed in a Tool Authorization Matrix defining the permitted callers, permitted actions, rate limits, and human approval requirements. Read-only tools have lower requirements than write tools. Financial transaction tools and external communication tools require human-in-the-loop approval by default. Apply the principle of least privilege: if an agent does not need a tool to complete its current task, it should not have access.
05
Audit Logging and Anomaly Detection
Conventional access logs do not capture AI-specific threat patterns. You need conversation-level logging (with appropriate data retention controls), retrieval query logging to detect unusual document access patterns, tool-call logging for agentic systems, and anomaly detection tuned to AI behavior baselines. Logs must support forensic investigation of incidents, not just compliance reporting.
Does your AI security posture match your deployment scale?
Our AI Readiness Assessment includes a dedicated AI security dimension covering all five control layers. Most enterprises discover significant gaps between their GenAI deployment scope and their actual controls.
Take the Free Assessment →

Shadow AI: The Threat You Are Already Losing

Before you build sophisticated prompt injection defenses, address the more immediate threat: your employees are already using consumer AI tools with enterprise data, and you have no visibility into it.

The average enterprise has 47 unauthorized AI tools in active use. Consumer ChatGPT. Personal Claude accounts. Browser-based summarization tools. AI writing assistants that upload documents to third-party servers. Each one is a potential data exfiltration channel with terms of service that permit the vendor to use your data for model training.

Shadow AI governance requires three components working together. First, acceptable use policy that specifies approved tools, approved data classifications for each tool, and explicit prohibitions on which data types must never enter any external AI service. Second, technical controls that enforce the policy: browser extensions that detect and block known unauthorized AI services, data loss prevention rules tuned to AI upload patterns, network monitoring for API calls to consumer AI endpoints. Third, a sanctioned alternative that is good enough that employees choose it over workarounds. If your approved tool is worse than ChatGPT Plus, employees will use ChatGPT Plus regardless of policy.

47
Average number of unauthorized AI tools in active use at enterprises with formal AI programs. Most security teams have zero visibility into this activity.

RAG Security: Where Most Teams Get It Wrong

Retrieval-augmented generation is the dominant architecture for enterprise GenAI, and its security requirements are poorly understood. Most implementations focus on retrieval quality and response accuracy. The security questions are typically answered with "our existing SharePoint permissions handle it." They do not.

The key misunderstanding: SharePoint permissions control who can navigate to a file in SharePoint. They do not automatically control what content a vector database can retrieve and inject into an LLM context window. When you chunk documents and index them in a vector store, you create a new data access layer that exists outside your permission management infrastructure.

Correct RAG security requires metadata tagging every chunk at ingestion time with the access classification and permitted user groups from the source document. The retrieval query must include a permission filter that matches the authenticated user's group memberships against those metadata tags before returning results. Any chunk not explicitly permitted for the requesting user must be excluded, not just deprioritized.

For cross-tenant deployments, namespace isolation at the vector database level is not optional. Application-layer tenant filtering has been bypassed in production by indirect prompt injection attacks. The vector store itself must enforce the isolation.

Third-Party AI Risk Management

The AI tools and platforms you purchase from vendors extend your security perimeter into theirs. Standard vendor security questionnaires were not designed with AI-specific risks in mind. Before signing any AI vendor contract, get written answers to the following questions.

On data handling: What data from my prompts, documents, and outputs is used for model training? How long is conversation data retained? In what jurisdictions is data processed? Does the service comply with GDPR data processing requirements? What happens to my data if I terminate the contract?

On model integrity: How are base models validated before deployment? Is there a process for detecting backdoors or adversarial behavior in fine-tuned models? How are model updates disclosed and managed for production integrations? What is the incident notification timeline for security events affecting your platform?

Most enterprise AI vendors will provide clear answers to all of these. The ones who do not are the ones you should not deploy.

Research Download
Enterprise AI Security Guide
52 pages covering the full AI security framework: threat taxonomy, secure architecture patterns, LLM-specific controls, RAG security, supply chain risk, and a 12-month maturity roadmap. Required reading at 14 of the top 20 global banks.
Download the AI Security Guide →

Implementation Sequence: Where to Start

You cannot implement all five control layers simultaneously while maintaining deployment velocity. Here is the pragmatic sequence based on risk reduction per implementation effort.

Week 1 to 2: Inventory and shadow AI controls. You cannot secure what you do not know about. Audit all AI tools currently in use, approved and unauthorized. Establish a shadow AI monitoring capability. Issue a clear acceptable use policy with explicit data classification rules. This addresses the most immediate risk at the lowest implementation cost.

Week 2 to 4: Retrieval access controls. If you have RAG systems in production, this is your highest technical priority. Audit whether your vector store enforces user-level permissions at query time. If it does not, fix this before expanding document coverage. A retrieval system with misconfigured access controls is a cross-organizational data leak waiting to happen.

Week 4 to 8: Output filtering. Deploy PII detection and sensitive topic classification on LLM outputs. Start with your highest-risk applications (those with broad document access or external-facing outputs). Tune classifiers against your actual output distribution, not generic benchmarks.

Week 8 to 12: Tool authorization framework for agentic systems. If you do not have agentic AI in production yet, design the authorization framework before you deploy it. Retrofitting access controls onto production agentic systems is significantly harder and riskier than designing them in from the start.

Ongoing: Audit logging and red team exercises. Security posture degrades without continuous testing. Schedule quarterly red team exercises specifically targeting AI attack vectors. Maintain conversation and retrieval audit logs. Establish incident response playbooks that cover AI-specific scenarios before you need them.

EU AI Act Security Implications

The EU AI Act creates binding security requirements for high-risk AI systems, not just governance obligations. Article 9 requires risk management systems that include "procedures for human oversight" specifically addressing the security risks identified in Annex III. For enterprises operating in EU markets, GenAI systems used in HR decisions, credit assessments, or access to essential services are high-risk by default.

Beyond compliance, the Act's transparency and logging requirements create a security benefit: organizations that implement proper audit logging for EU AI Act compliance automatically have better incident detection capabilities. The compliance work and the security work are the same work.

Assess your GenAI security posture
Our free AI Readiness Assessment includes a dedicated security dimension covering all five control layers. Understand your actual exposure before your next deployment.
Free Assessment →
The AI Advisory Insider
Weekly intelligence on enterprise AI security, governance, and deployment from senior practitioners. No vendor content.