Your enterprise just deployed a GenAI assistant to 8,000 employees. It has access to the document management system, the CRM, and internal wikis. You have strong traditional cybersecurity. What you almost certainly do not have is a single AI-specific security control.
That is the situation at 67% of enterprises actively deploying GenAI, according to our assessments across 200+ organizations. Security teams secured the perimeter. They secured the endpoints. They never modeled the threat of an AI system that can be manipulated through ordinary text inputs to exfiltrate data, bypass access controls, or take unauthorized actions.
GenAI security is not a subset of traditional cybersecurity. It requires a distinct threat model, distinct controls, and distinct governance. This article covers the specific threats, the control framework that addresses them, and how to sequence implementation when you cannot do everything at once.
The AI-Specific Threat Landscape
Traditional security models assume that threats come from outside, attempt to breach boundaries, and leave traces in network or access logs. GenAI breaks all three assumptions. Threats arrive through normal user inputs. They exploit the model's intended functionality. They often leave no trace in conventional logs.
Here are the six threat categories that matter most for enterprise GenAI deployments:
The Five-Layer GenAI Security Control Framework
Effective GenAI security requires controls at five distinct layers. They are not substitutes for each other. A prompt injection defense at Layer 1 does not protect you from a misconfigured retrieval system at Layer 3. You need all five.
Shadow AI: The Battle You Are Already Losing
Before you build sophisticated prompt injection defenses, address the more immediate threat: your employees are already using consumer AI tools with enterprise data, and you have no visibility into it.
The average enterprise has 47 unauthorized AI tools in active use. Consumer ChatGPT. Personal Claude accounts. Browser-based summarization tools. AI writing assistants that upload documents to third-party servers. Each one is a potential data exfiltration channel with terms of service that permit the vendor to use your data for model training.
Shadow AI governance requires three components working together. First, an acceptable use policy that specifies approved tools, the data classifications approved for each tool, and explicit prohibitions on which data types must never enter any external AI service. Second, technical controls that enforce the policy: browser extensions that detect and block known unauthorized AI services, data loss prevention rules tuned to AI upload patterns, and network monitoring for API calls to consumer AI endpoints. Third, a sanctioned alternative good enough that employees choose it over workarounds. If your approved tool is worse than ChatGPT Plus, employees will use ChatGPT Plus regardless of policy.
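As a concrete illustration of the second component, the sketch below scans egress proxy logs for traffic to known consumer AI endpoints. The domain list, log columns, and file name are assumptions standing in for your proxy's actual export format.

```python
# Minimal sketch of one shadow AI detection control: scanning egress proxy logs
# for traffic to known consumer AI endpoints. The domain list and log schema
# (timestamp, user, host) are illustrative assumptions; adapt them to your
# proxy's actual export format and keep the blocklist current.

import csv
from collections import Counter

# Hypothetical blocklist of consumer AI services not approved for enterprise data.
CONSUMER_AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
    "poe.com",
}

def flag_shadow_ai(proxy_log_path: str) -> Counter:
    """Count requests per (user, domain) pair that hit unapproved AI endpoints."""
    hits: Counter = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):  # assumes columns: timestamp, user, host
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in CONSUMER_AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in flag_shadow_ai("egress_proxy.csv").most_common(20):
        print(f"{user} -> {host}: {count} requests")
```

Detection like this is only a starting point; pairing it with data loss prevention rules on upload payloads catches the cases where the domain list lags behind new tools.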
RAG Security: Where Most Teams Get It Wrong
Retrieval-augmented generation is the dominant architecture for enterprise GenAI, and its security requirements are poorly understood. Most implementations focus on retrieval quality and response accuracy. The security questions are typically answered with "our existing SharePoint permissions handle it." They do not.
The key misunderstanding: SharePoint permissions control who can navigate to a file in SharePoint. They do not automatically control what content a vector database can retrieve and inject into an LLM context window. When you chunk documents and index them in a vector store, you create a new data access layer that exists outside your permission management infrastructure.
Correct RAG security requires tagging every chunk at ingestion time with the access classification and permitted user groups of its source document. The retrieval query must include a permission filter that matches the authenticated user's group memberships against those metadata tags before returning results. Any chunk not explicitly permitted for the requesting user must be excluded, not just deprioritized.
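A minimal sketch of that query-time filter follows, using an in-memory store as a stand-in for whatever vector database you actually run. The chunk fields and scoring are illustrative; the point is that chunks the user's groups do not cover never reach the ranking stage or the context window.

```python
# Minimal sketch of permission-filtered retrieval, assuming each chunk was
# tagged at ingestion with its source document's classification and permitted
# groups. The in-memory store and cosine scoring stand in for a real vector
# database; the hard filter runs before ranking, so unauthorized chunks are
# excluded rather than merely ranked lower.

from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    embedding: list[float]
    classification: str                  # e.g. "internal", "confidential"
    allowed_groups: set[str] = field(default_factory=set)

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def retrieve(query_embedding: list[float], chunks: list[Chunk],
             user_groups: set[str], k: int = 5) -> list[Chunk]:
    # Hard permission filter first: a chunk the user is not entitled to
    # never reaches the ranking stage or the LLM context window.
    permitted = [c for c in chunks if c.allowed_groups & user_groups]
    permitted.sort(key=lambda c: cosine(query_embedding, c.embedding), reverse=True)
    return permitted[:k]
```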
For cross-tenant deployments, namespace isolation at the vector database level is not optional. Application-layer tenant filtering has been bypassed in production by indirect prompt injection attacks. The vector store itself must enforce the isolation.
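One way to make that concrete: bind the namespace handle to the authenticated tenant identity so that no filter parameter, and no prompt, has a code path to another tenant's vectors. The dict-backed store below is a stand-in for your vector database's real namespace or collection mechanism.

```python
# Sketch of tenant isolation enforced by the store rather than by an
# application-layer filter: every tenant maps to a separate namespace, and the
# namespace handle is derived from the authenticated tenant identity, never
# from user or model input. The dict-backed store is a placeholder for a real
# vector database's namespace or collection feature.

class NamespacedVectorStore:
    def __init__(self):
        self._namespaces: dict[str, list] = {}

    def namespace_for(self, authenticated_tenant_id: str) -> list:
        # The only way to obtain a handle is through the tenant id established
        # at authentication time, so a prompt-injected "search tenant B"
        # request cannot reach another tenant's vectors.
        return self._namespaces.setdefault(f"tenant_{authenticated_tenant_id}", [])
```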
Third-Party AI Risk Management
The AI tools and platforms you purchase from vendors extend your security perimeter into theirs. Standard vendor security questionnaires were not designed with AI-specific risks in mind. Before signing any AI vendor contract, get written answers to the following questions.
On data handling: What data from my prompts, documents, and outputs is used for model training? How long is conversation data retained? In what jurisdictions is data processed? Does the service comply with GDPR data processing requirements? What happens to my data if I terminate the contract?
On model integrity: How are base models validated before deployment? Is there a process for detecting backdoors or adversarial behavior in fine-tuned models? How are model updates disclosed and managed for production integrations? What is the incident notification timeline for security events affecting your platform?
Most enterprise AI vendors will provide clear answers to all of these. The ones that will not are the ones whose products you should not deploy.
Implementation Sequence: Where to Start
You cannot implement all five control layers simultaneously while maintaining deployment velocity. Here is a pragmatic sequence, ordered by risk reduction per unit of implementation effort.
Week 1 to 2: Inventory and shadow AI controls. You cannot secure what you do not know about. Audit all AI tools currently in use, approved and unauthorized. Establish a shadow AI monitoring capability. Issue a clear acceptable use policy with explicit data classification rules. This addresses the most immediate risk at the lowest implementation cost.
Week 2 to 4: Retrieval access controls. If you have RAG systems in production, this is your highest technical priority. Audit whether your vector store enforces user-level permissions at query time. If it does not, fix this before expanding document coverage. A retrieval system with misconfigured access controls is a cross-organizational data leak waiting to happen.
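A hedged sketch of that audit, assuming a `rag_retrieve` function shaped like your application's actual retrieval call and chunks that carry a classification field: query as a deliberately low-privilege test identity and fail loudly if anything above that clearance comes back.

```python
# Minimal audit sketch: issue retrieval queries as a low-privilege test
# identity and report any chunk above that identity's clearance. `rag_retrieve`
# is a placeholder for whatever function your application calls to fetch
# context; the classification labels are illustrative.

RESTRICTED = {"confidential", "restricted"}

def audit_retrieval_permissions(rag_retrieve, probe_queries, test_user_groups):
    leaks = []
    for query in probe_queries:
        for chunk in rag_retrieve(query, user_groups=test_user_groups):
            if chunk.classification in RESTRICTED:
                leaks.append((query, chunk.classification, chunk.text[:80]))
    return leaks  # a non-empty list means the vector store is not enforcing ACLs

# Example: probe with queries known to match confidential documents
# leaks = audit_retrieval_permissions(rag_retrieve, ["Q3 board deck summary"], {"staff"})
# assert not leaks, f"Retrieval returned restricted chunks: {leaks}"
```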
Week 4 to 8: Output filtering. Deploy PII detection and sensitive topic classification on LLM outputs. Start with your highest-risk applications (those with broad document access or external-facing outputs). Tune classifiers against your actual output distribution, not generic benchmarks.
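As an illustration, the sketch below applies regex-based PII detection and redaction to model output before it leaves the system. The patterns are deliberately simple stand-ins; production filters should be tuned to your actual output distribution and combined with a sensitive-topic classifier, as noted above.

```python
# Minimal sketch of an output filter: regex-based PII detection on model
# responses before they reach the user or downstream systems. The patterns
# below (email, US SSN, card-like numbers) are illustrative only.

import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def filter_output(text: str) -> tuple[str, list[str]]:
    """Redact likely PII and report which detectors fired."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            findings.append(label)
            text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text, findings

safe_text, hits = filter_output("Contact jane.doe@example.com, SSN 123-45-6789.")
# hits == ["email", "ssn"]; route flagged responses to review instead of the user
```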
Week 8 to 12: Tool authorization framework for agentic systems. If you do not have agentic AI in production yet, design the authorization framework before you deploy it. Retrofitting access controls onto production agentic systems is significantly harder and riskier than designing them in from the start.
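One possible shape for that framework, with illustrative tool names and roles: every tool invocation passes through an authorization gate that decides from the calling user's entitlements, never from the model's stated intent, and high-impact tools require human approval.

```python
# Sketch of an allowlist-based tool authorization gate for an agentic system,
# designed before deployment as recommended above. Tool names, roles, and the
# Decision structure are illustrative; the essential property is that the
# agent can only invoke a tool through this gate.

from dataclasses import dataclass

# Per-role allowlists, with high-impact tools requiring human approval.
TOOL_POLICY = {
    "support_agent": {"search_kb": "allow", "create_ticket": "allow", "issue_refund": "approval"},
    "analyst": {"search_kb": "allow", "run_sql_readonly": "allow"},
}

@dataclass
class Decision:
    allowed: bool
    needs_human_approval: bool
    reason: str

def authorize_tool_call(user_role: str, tool_name: str) -> Decision:
    policy = TOOL_POLICY.get(user_role, {})
    verdict = policy.get(tool_name)
    if verdict is None:
        return Decision(False, False, f"{tool_name} not allowlisted for role {user_role}")
    if verdict == "approval":
        return Decision(True, True, f"{tool_name} requires human sign-off")
    return Decision(True, False, "allowed")

# The agent runtime calls this before executing any tool:
# decision = authorize_tool_call(session.user_role, requested_tool)
```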
Ongoing: Audit logging and red team exercises. Security posture degrades without continuous testing. Schedule quarterly red team exercises specifically targeting AI attack vectors. Maintain conversation and retrieval audit logs. Establish incident response playbooks that cover AI-specific scenarios before you need them.
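For the logging piece, here is a minimal sketch of what one AI-specific audit record might capture. Field names are assumptions; the essential property is that each interaction produces a structured, queryable record of who asked, what was retrieved, which tools ran, and which filters fired.

```python
# Minimal sketch of AI-specific audit logging: one structured record per model
# interaction, written to append-only storage that incident responders can
# query. Field names are illustrative assumptions.

import json, time, uuid

def log_ai_event(user_id: str, prompt_hash: str, retrieved_doc_ids: list[str],
                 tool_calls: list[str], output_filter_hits: list[str],
                 sink=print) -> None:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,
        "prompt_sha256": prompt_hash,          # hash, not raw text, if prompts are sensitive
        "retrieved_doc_ids": retrieved_doc_ids,
        "tool_calls": tool_calls,
        "output_filter_hits": output_filter_hits,
    }
    sink(json.dumps(record))  # replace `sink` with your SIEM or log pipeline writer
```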
EU AI Act Security Implications
The EU AI Act creates binding security requirements for high-risk AI systems, not just governance obligations. Article 9 requires a risk management system, Article 14 requires human oversight, and Article 15 requires an appropriate level of accuracy, robustness, and cybersecurity. For enterprises operating in EU markets, GenAI systems used in HR decisions, credit assessments, or access to essential services fall into the high-risk categories listed in Annex III.
Beyond compliance, the Act's transparency and logging requirements create a security benefit: organizations that implement proper audit logging for EU AI Act compliance automatically have better incident detection capabilities. The compliance work and the security work are the same work.