In the 2010s, employees started signing up for cloud apps without telling IT. Dropbox accounts multiplied. Slack workspaces popped up in every department. Shadow IT became one of the defining security challenges of the decade.

Now it is happening again, except this time the unauthorized tools are not just storing data. They are making decisions.

A Belitsoft report published April 6 found that the average enterprise currently operates 12 AI agents, a number projected to reach 20 by 2027. Roughly 50% of those agents operate completely independently, without connecting to other agents or to any centralized oversight system.

Meanwhile, Microsoft's Cyber Pulse report found that 29% of employees have already turned to unsanctioned AI agents for work tasks. Nearly a third of your workforce may be delegating decisions to tools your organization does not know about, cannot monitor, and has never evaluated for risk.

This is not a hypothetical. It is happening now, across industries, at every company size.

The Deployment Illusion

The numbers tell a strange story. According to Belitsoft's data, 71% of businesses claim to have deployed AI agents. That sounds like broad adoption. But dig one layer deeper: just 11% of the intended agentic use cases from the previous year actually went into production.

That gap between "deployed" and "in production" is where the real risk lives. It means organizations are spinning up agents in pilots, proofs of concept, and departmental experiments, then leaving them running without the oversight, monitoring, or governance that a production system demands.

Deloitte's State of AI 2026 report, surveying 3,235 director-to-C-suite leaders across 24 countries, confirms the pattern at scale. Nearly 75% of companies plan to deploy agentic AI within two years, but only 21% report having a mature model for agent governance. The ambition is running three to four times faster than the infrastructure to manage it.

As Nitin Mittal, Deloitte's Global AI Leader, put it: "Across the enterprise, we're seeing massive ambition around AI, with organizations starting to pivot from experimentation to integrating AI into the core of the business."

The pivot is real. The guardrails are not.

The Numbers That Should Worry You

TrendAI's global survey of 3,700 business and IT decision-makers paints a picture of organizations that know they have a problem and are pushing forward anyway:

  • 67% felt pressured to approve AI despite security concerns, with 1 in 7 describing those concerns as "extreme."
  • 44% identify AI agents accessing sensitive data as their single biggest risk.
  • 31% lack observability or auditability over their deployed AI systems, meaning they cannot reconstruct what an agent did or why.
  • Only 38% have comprehensive AI policies in place.

Rachel Jin, Chief Platform & Business Officer of TrendAI (Trend Micro), summarized the disconnect: "Organizations are not lacking awareness of risk, they're lacking the conditions to manage it."

That distinction matters. This is not an awareness problem. It is an execution problem. Companies know their AI agents are ungoverned. They do not have the frameworks, processes, or expertise to govern them.

When Agents Make Mistakes, You Pay

The stakes are not abstract. Gartner predicts that by mid-2026, new categories of unlawful AI-informed decision-making will generate more than $10 billion in remediation costs across global enterprises.

Gartner also predicts that through 2027, costs from task-driven AI agent abuses will be 4x higher than those from multi-agent systems. In other words, the biggest financial risk is not sophisticated, coordinated AI systems. It is the individual, ungoverned agents scattered across your organization doing things nobody is watching.

As Mark Babington, Executive Director of the UK's Financial Reporting Council, told The Register: "You can't blame it on the box. If you use this technology, you are still accountable for it."

That accountability does not shift just because the agent was deployed by a department head without IT's knowledge, or because the pilot was never formally brought into production. If an AI agent makes a consequential decision on behalf of your organization, your organization owns the outcome.

What to Do About It Before It Costs You

The shadow IT crisis of the 2010s taught a clear lesson: you cannot ban unsanctioned tools. You have to provide better alternatives and build governance around what people are already using.

The same principle applies to agent sprawl. Here is where to start:

Audit what exists. You cannot govern agents you do not know about. Inventory every AI agent, authorized or not, operating in your organization. Include departmental experiments, abandoned pilots, and tools employees signed up for independently.

Establish observability. For every agent in production (or quasi-production), answer three questions: What data does it access? What decisions does it make? Who is accountable for its outputs? If you cannot answer all three, that agent is ungoverned.

Build governance before you scale. The Deloitte data is clear: 85% of companies plan to customize agents for their specific business needs. If you are building custom agents on top of the 12 you already have, without governance infrastructure, you are compounding risk with every deployment.

Assign accountability. Every AI agent needs an owner: a human who is responsible for its behavior, its data access, and its outputs. "The vendor manages it" is not accountability. "IT approved it" is not accountability. A named person who reviews the agent's behavior on a defined cadence: that is accountability.

The companies that treated shadow IT as a governance challenge rather than a technology problem came out ahead. The same will be true for agent sprawl. The question is whether you build the governance now, while you have 12 agents, or later, when you have 20 and one of them has already made a decision you cannot explain.


Cairn Agents helps SMBs and mid-market companies build AI governance before the problems arrive. If your organization is deploying AI agents without a clear framework for oversight, start with an assessment.