A survey finding where every single respondent agrees on something is unusual, and worth paying attention to.
According to the Kiteworks 2026 Data Security, Compliance & Risk Forecast Report, 100% of organizations surveyed have agentic AI — AI systems that can take actions autonomously, not just answer questions — on their roadmap. Zero exceptions.
That number became a focal point at RSAC 2026 last week. Not because the adoption is surprising — everyone expected it — but because of what came with it: most organizations cannot actually govern the AI agents they already have.
The Governance Gap Is Wider Than You Think
The same research found that 63% of organizations cannot enforce purpose limitations on their AI agents, meaning they cannot reliably ensure an agent only does what it was designed to do. Another 60% cannot terminate a misbehaving agent, and 55% cannot isolate their AI systems from the broader corporate network.
Read that again: more than half of organizations deploying AI agents have no kill switch and no containment boundary.
This is not a theoretical risk. The State of AI Agent Security 2026 Report found that 88% of organizations reported confirmed or suspected AI agent security incidents in the past year.
Shadow AI Is the Multiplier
The governance gap gets worse when you account for agents that security teams do not even know about. BeyondTrust's Phantom Labs found a 466.7% year-over-year increase in enterprise AI agents, with the majority operating as a "shadow AI workforce" — agents with privileged access that security teams cannot see or govern.
When these shadow systems cause breaches, the costs are severe. IBM's 2025 Cost of a Data Breach Report found that shadow AI breaches cost $670,000 more than standard incidents, totaling $4.63 million on average.
For SMBs without dedicated security operations centers, the proportional risk is even higher. You do not need a Fortune 500 breach to be existentially damaging when your annual revenue is in the single-digit millions.
The Vendor Response Is Starting — But It Is Not Enough
RSAC 2026 saw several major vendor announcements targeting this gap:
- Microsoft launched Agent 365, a control plane with Entra Agent ID for agent identity management.
- TrendAI partnered with NVIDIA to build an agent governance platform.
- Astrix expanded its agent security platform.
These are steps in the right direction. But enterprise tooling does not solve the fundamental problem for small and mid-sized businesses: you need to know what AI agents exist in your environment before you can govern them.
What You Should Do Now
First, discover what you have. Conduct a shadow AI audit. Your employees are almost certainly using AI tools — browser extensions, chat interfaces, coding assistants, automation platforms — that IT has not approved and does not monitor. You cannot secure what you cannot see.
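One practical starting point for that audit is scanning outbound proxy or DNS logs for traffic to well-known AI service endpoints. The sketch below assumes a simple "timestamp user domain" log format and uses an illustrative, deliberately incomplete domain list; both are assumptions you would adapt to your own environment.

```python
# Hypothetical shadow AI audit helper: flag users whose traffic hits
# known AI service domains. The domain list below is illustrative and
# far from exhaustive.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def find_shadow_ai(log_lines):
    """Return sorted (user, domain) pairs matching known AI domains.

    Assumes each line looks like 'timestamp user domain', a common
    proxy-log shape; adjust the parsing for your actual log format.
    """
    hits = set()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines rather than failing the audit
        _timestamp, user, domain = parts[0], parts[1], parts[2]
        if domain in AI_SERVICE_DOMAINS:
            hits.add((user, domain))
    return sorted(hits)
```

Even a crude pass like this tends to surface tools nobody declared; the real work is then deciding which of those tools to sanction, replace, or block.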
Second, define boundaries before you scale. Before deploying any new AI agents, establish three things: what the agent is allowed to do (purpose limitation), how to stop it if something goes wrong (kill switch), and what data it can access (containment). If you cannot answer all three, you are not ready to deploy.
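Those three boundaries are concrete enough to encode before deployment. The sketch below models them as a single policy object; the class and field names are assumptions for illustration, not any vendor's API.

```python
from dataclasses import dataclass

# Illustrative policy object capturing the three pre-deployment
# boundaries: purpose limitation, kill switch, and containment.
@dataclass
class AgentPolicy:
    allowed_actions: frozenset      # purpose limitation: what the agent may do
    allowed_data_scopes: frozenset  # containment: what data it may touch
    killed: bool = False            # kill switch state

    def kill(self):
        """Kill switch: once set, no further action is ever authorized."""
        self.killed = True

    def authorize(self, action, data_scope):
        """Authorize only if the agent is alive, the action is within its
        purpose, and the data scope is inside its containment boundary."""
        if self.killed:
            return False
        return (action in self.allowed_actions
                and data_scope in self.allowed_data_scopes)
```

The point is less the code than the discipline: if you cannot write down the allowed actions and data scopes for an agent, you have answered the readiness question already.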
Third, treat AI agents like employees, not software. They need identity, access controls, monitoring, and offboarding procedures. The same governance principles you apply to a new hire should apply to any autonomous system acting on your company's behalf.
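The employee analogy can be made literal in tooling: every agent gets an identity, a named owner, scoped access, a monitoring trail, and an offboarding step that revokes everything. The registry below is a minimal sketch under those assumptions; the names are illustrative, not from any specific product.

```python
from datetime import datetime, timezone

# Sketch of an agent registry that mirrors an employee lifecycle:
# onboarding assigns identity, owner, and access; offboarding revokes
# access; every event lands in an audit trail for monitoring.
class AgentRegistry:
    def __init__(self):
        self._agents = {}
        self.audit_log = []  # (timestamp, event, agent_id) monitoring trail

    def onboard(self, agent_id, owner, access):
        """Register an agent with a human owner and an explicit access set."""
        self._agents[agent_id] = {
            "owner": owner,
            "access": set(access),
            "active": True,
        }
        self._log("onboard", agent_id)

    def offboard(self, agent_id):
        """Deactivate the agent and revoke all access, the same way a
        departing employee's accounts are disabled."""
        agent = self._agents[agent_id]
        agent["active"] = False
        agent["access"].clear()
        self._log("offboard", agent_id)

    def _log(self, event, agent_id):
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), event, agent_id)
        )
```

Requiring a named human owner per agent is the design choice that matters most here: orphaned agents with no owner are exactly how a shadow AI workforce accumulates.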
If you are not sure what AI tools your team is already using, start with our free AI readiness assessment. Knowing what you have is the first step to controlling it.