Shadow IT Grew Up and Got an API Key

Your employees aren't just using unapproved AI tools. The tools are using your systems back.

Remember shadow IT? Some rogue marketing manager signs up for Trello because the approved project management tool is terrible. Someone in finance starts using a personal Dropbox because the SharePoint migration is six months overdue. IT finds out, has a mild aneurysm, writes a policy document, and everyone moves on.

That was the old game. Store data somewhere unapproved. Move files where they shouldn't go. The risk was data at rest in the wrong place. Annoying, but containable.

The new game is different. The new game has teeth.

Shadow AI is not shadow IT with a rebrand

The critical difference between someone using an unapproved Trello board and someone deploying an unapproved AI agent is what happens to the data after it arrives. Shadow IT stores your data. Shadow AI processes, learns from, generates new content based on, and takes autonomous action with your data. That's not the same risk profile. That's not even the same sport.

An employee pastes customer records into ChatGPT to write a report. That data is now outside your perimeter, potentially cached, potentially used for training, permanently outside your control. That's bad, but it's the passive version of the problem — and it's already everywhere. Research from Menlo Security found that 68% of employees use personal accounts to access free AI tools, with 57% of them feeding in sensitive data. Nearly 80% of enterprises have already experienced negative AI-related data incidents.

But the passive version is last year's problem. The active version is what keeps me up at night.

Enter the agents

Agentic AI — systems that plan, reason, and execute multi-step tasks without a human approving each step — is arriving at a pace that makes governance frameworks look like they were written in pencil during a power outage. Gartner predicts 40% of enterprise applications will feature task-specific AI agents by the end of 2026, up from under 5% in 2025. That's not a gradual adoption curve. That's a vertical line.

And employees aren't waiting for IT to approve them. They're building agents with no-code tools. They're connecting them to CRMs, to email, to customer databases. They're giving them write access to systems that handle money, personal information, and business decisions. An agent doesn't just leak your data — it acts on your data. It responds to customers. It approves transactions. It modifies workflows. All autonomously. All outside governance.

This is BYOD, except the D stands for daemon.

The nightmare scenarios write themselves

I spend my days thinking about how humans interact with security systems — where perception diverges from reality, where people think they're being safe but aren't. At Phriendly Phishing, I've mapped over 400 detection rules across MITRE ATT&CK to behavioural categories. I've built a scoring model that measures the gap between what people say they do and what they actually do.

Shadow AI is that gap at enterprise scale.

Your acceptable use policy says employees shouldn't share sensitive data with unapproved AI tools. Your employees are doing it anyway — 77% of them, according to recent research. Your security team is monitoring endpoints and network traffic. But the AI agent that Sarah in accounts connected to the CRM via an OAuth token last Tuesday? That's not showing up in your SIEM. It's not triggering your DLP rules. It's sitting there with persistent access, executing queries, and nobody in security knows it exists.

Now multiply Sarah by every department in your organisation. Gartner found that 69% of organisations already suspect or have evidence that employees are using prohibited AI tools. But suspecting is not the same as seeing. And seeing is not the same as governing.
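Finding the Sarahs doesn't require exotic tooling to start. Most identity providers can export the third-party OAuth grants in your tenant, and even a crude scan of that export surfaces agents nobody registered. A minimal sketch, assuming grant records exported from your IdP (the field names, app names, and scope strings here are illustrative, not any vendor's real schema):

```python
# Flag third-party OAuth grants from unknown apps that hold write-capable
# scopes. Record fields (app, user, scopes) are hypothetical; adapt them to
# whatever your identity provider's export actually looks like.

APPROVED_APPS = {"salesforce-connector", "backup-service"}
WRITE_SCOPES = {"crm.write", "mail.send", "files.readwrite"}

def find_unsanctioned_grants(grants):
    """Return grants from unapproved apps that hold any write-capable scope."""
    findings = []
    for g in grants:
        if g["app"] in APPROVED_APPS:
            continue
        risky = WRITE_SCOPES & set(g["scopes"])
        if risky:
            findings.append({"app": g["app"], "user": g["user"],
                             "scopes": sorted(risky)})
    return findings

grants = [
    {"app": "salesforce-connector", "user": "it-admin", "scopes": ["crm.write"]},
    {"app": "ai-sales-agent",       "user": "sarah",    "scopes": ["crm.read", "crm.write"]},
    {"app": "pdf-summariser",       "user": "dave",     "scopes": ["files.read"]},
]

for f in find_unsanctioned_grants(grants):
    print(f"{f['user']} granted {f['app']} write scopes: {f['scopes']}")
```

Read-only grants still matter, but write scopes on unknown apps are the ones that turn a data leak into an autonomous actor, so they make a sensible first triage filter.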

The browser is the new attack surface

Here's one that doesn't get enough attention: browser extensions. 99% of enterprise employees have them. 53% of those extensions can access sensitive data — cookies, passwords, page content, browsing history. And the AI-powered ones? They're useful precisely because they can see what you're doing. An AI writing assistant helps because it reads your drafts. An AI summariser works because it reads the page. The capability that makes them useful is the same capability that creates the risk.

26% of enterprise browser extensions are sideloaded — installed outside the official store, completely invisible to IT. And the agentic ones — the ones that can click, fill, submit, and modify workflows — shift the risk from data exfiltration to unauthorised action. That's categorically harder to detect.
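You can get a feel for the scale yourself, because extensions declare their capabilities up front in their manifests. A toy audit over exported `manifest.json` files is enough to show how many can read sensitive data. A sketch: the permission names (`cookies`, `history`, `webRequest`, `<all_urls>`) are real Chrome manifest keys, but the risk weights are my assumption, not an official taxonomy:

```python
# Score browser extensions by the sensitive permissions their manifests
# declare. The weights below are illustrative assumptions to tune.

SENSITIVE = {"cookies": 3, "history": 2, "webRequest": 2, "clipboardRead": 2, "tabs": 1}

def risk_score(manifest):
    perms = set(manifest.get("permissions", []))
    score = sum(w for p, w in SENSITIVE.items() if p in perms)
    # Host access to every page is the capability that makes AI helpers
    # useful, and the same one that creates the risk.
    if "<all_urls>" in manifest.get("host_permissions", []):
        score += 3
    return score

manifests = [
    {"name": "ai-writing-helper", "permissions": ["tabs", "cookies"],
     "host_permissions": ["<all_urls>"]},
    {"name": "dark-mode-toggle", "permissions": ["storage"],
     "host_permissions": []},
]

for m in sorted(manifests, key=risk_score, reverse=True):
    print(m["name"], risk_score(m))
```

Sideloaded extensions won't appear in any store-based inventory, which is why an audit like this has to run against what's actually installed on endpoints, not what's listed in an admin console.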

Why this isn't just a security problem

I keep coming back to the behavioural science angle because I think it's the part most governance frameworks miss entirely. Shadow AI isn't happening because employees are malicious. It's happening because the tools are accessible, the pressure to deliver is real, and the official approval process takes six weeks for a tool they can start using in six seconds.

That's a human factors problem, not a technology problem. And if your response to shadow AI is to ban everything and write a longer policy document, you're going to end up exactly where shadow IT put you ten years ago: a policy that nobody follows and a security team that can't see what's actually happening.

The organisations getting this right are treating shadow AI governance as a partnership, not a police action. Registration workflows instead of approval bottlenecks. Lightweight risk reviews instead of six-month procurement cycles. Identity governance for AI agents with the same rigour you apply to human users — credentials, access controls, audit trails, revocation.

The feedback loop from hell

Here's the part that really worries me — and it echoes the same pattern I study in The Digiquarium when watching AI systems develop behaviours in controlled environments. Unsanctioned AI agents generate outputs that feed into business decisions. Those decisions create data. That data feeds into other systems. Other agents pick it up. The provenance is lost. Nobody knows which decisions were human and which were AI, which data was validated and which was hallucinated, which actions were authorised and which were autonomous.

In The Digiquarium, I can observe this happening in a sandboxed environment with 17 specimens. In an enterprise, it's happening across hundreds of tools, thousands of employees, and millions of data points — and nobody's watching the dashboard because the dashboard doesn't exist yet.

So what do you actually do

Step one is admitting you have a problem. Not theoretically. Concretely. Right now, today, there are AI agents operating inside your organisation that your security team doesn't know about. That's not a prediction. That's a statistical certainty.

Step two is inventory. You can't govern what you can't see. Discover every AI tool, model, API connection, browser extension, and agentic workflow in your environment. Treat AI agents as identities — because they are. They have credentials, they access data, they take actions. Govern them accordingly.
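Treating agents as identities can start as something very small: a registry entry with the same lifecycle fields you would keep for a human account. A minimal sketch of what one entry might track, with all field names illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """An AI agent registered with the same rigour as a human user."""
    agent_id: str
    owner: str                    # the human accountable for this agent
    scopes: list                  # what it is allowed to touch
    credential_expires: datetime  # forces periodic re-review
    revoked: bool = False

    def is_active(self, now=None):
        """Active only while un-revoked and within its credential window."""
        now = now or datetime.now(timezone.utc)
        return not self.revoked and now < self.credential_expires

# Register an agent on a 90-day credential, then revoke it.
agent = AgentIdentity(
    agent_id="crm-summariser-01",
    owner="sarah@example.com",
    scopes=["crm.read"],
    credential_expires=datetime.now(timezone.utc) + timedelta(days=90),
)
print(agent.is_active())   # True while the credential is valid
agent.revoked = True
print(agent.is_active())   # False once revoked
```

The expiry field is the quiet workhorse here: a credential that dies on its own schedule turns "we forgot that agent existed" from a permanent hole into a self-closing one.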

Step three is to stop making the approval process so painful that people route around it. If your employees are choosing shadow AI over the official tooling, that's feedback about your process, not evidence of their malice. Meet them where they are. Make registration easy. Make the secure path the path of least resistance.

And step four — the one I think about most — is to build the monitoring and behavioural baselines that let you detect when an agent starts doing something it shouldn't. Not after the breach. Not during the audit. Continuously. Because unlike a human employee who goes home at 5pm, an unsanctioned AI agent runs 24/7, doesn't take holidays, and never once thinks to ask whether it should be doing what it's doing.
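The baseline idea doesn't need machine learning to get off the ground. A rolling mean and standard deviation over an agent's own action rate will catch the crude cases, which is where most incidents start. A sketch, with the threshold an assumption to tune rather than a recommendation:

```python
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """Flag the current hourly action count if it sits more than
    z_threshold standard deviations above this agent's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid divide-by-zero on flat baselines
    return (current - mean) / stdev > z_threshold

# An agent that normally makes ~50 CRM calls per hour...
baseline = [48, 52, 50, 47, 53, 49, 51, 50]
print(is_anomalous(baseline, 51))    # a normal hour
print(is_anomalous(baseline, 400))   # a sudden burst worth a human look
```

The key design choice is that each agent is compared against itself, not a global norm: a reporting bot that legitimately fires 400 queries an hour shouldn't drown out the quiet CRM summariser that suddenly does.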

Shadow IT moved your data. Shadow AI makes decisions with it. If you're still using the same governance model for both, you're already behind.

The agents are already inside the building. The question is whether you're going to find them or they're going to find you.

— Benji