67% of CISOs Can't See Their AI Footprint. Here's What They're Missing.
A Pentera survey of 300 US CISOs, published March 2026, found that 67% report limited visibility into how AI is used across their organization. Zero percent — not one — claimed full visibility.
That stat alone should make you uncomfortable. But the more interesting question is: what, exactly, are they not seeing?
I run a MITM proxy that intercepts AI API calls at the network layer. We've analyzed over 26,000 requests across 19 source applications and 9 AI providers. The gap between what security teams think is happening and what's actually happening is wider than most people realize.
Here's what's in the blind spot.
81% of AI Requests Don't Identify Their Model
From our telemetry, 81.2% of intercepted AI requests arrive with no model identifier visible at the network layer. The request hits api.anthropic.com or api.openai.com, but unless you inspect the request body, you can't tell if it's GPT-4, Claude Opus, or a fine-tuned model burning $0.60 per request.
This matters because:
- Cost is model-dependent. A request to Claude Haiku costs roughly 1/30th what the same request to Claude Opus costs. If you can't see which model your teams are using, you can't budget.
- Risk is model-dependent. Different models have different context windows, different tool-calling capabilities, and different propensities for following prompt injections. A team that switched from Sonnet to Opus without telling anyone just changed the threat profile.
- Compliance is model-dependent. Some models are hosted in specific regions. Some process data through third-party infrastructure. If you can't identify the model, you can't map your data flows.
Traditional security tools — firewalls, SASE platforms, even most DLP solutions — see the destination URL and the TLS certificate. They know your developer made a request to Anthropic. They don't know it was Claude Opus 4.5 generating 150,000 tokens of code with your production database schema in the system prompt.
The 81% isn't a bug in the tooling. It's a fundamental limitation of not inspecting request bodies.
Half Your AI Traffic Has No Human Behind It
51.4% of the AI requests we intercept come from programmatic sources — Node.js processes, Axios HTTP clients, scripts, and CI/CD pipelines. Not a human sitting in a chat window. Not a developer using Cursor or Copilot. Automated code running in the background, making API calls with the credentials provisioned to it.
This is the shift from "shadow AI" to "autonomous AI." Shadow AI is a human using ChatGPT without approval. Autonomous AI is a build pipeline that calls Claude 400 times per deployment and nobody on the security team knows it exists.
Browser extensions — the security industry's favorite answer to shadow AI — cannot see any of this. A browser extension monitors browser tabs. It can't see a Node.js process making HTTPS calls from a CI runner. It can't see a Python script calling the OpenAI SDK. It can't see Claude Code running in a terminal.
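At the proxy layer, by contrast, the programmatic-vs-interactive split mostly falls out of the User-Agent header. A minimal classifier sketch — the marker strings below are illustrative, not a complete allow-list:

```python
# Illustrative markers only; real telemetry needs a fuller, maintained list.
PROGRAMMATIC_MARKERS = ("node", "axios", "python-requests", "python-httpx",
                        "go-http-client", "curl")
INTERACTIVE_MARKERS = ("cursor", "copilot", "mozilla", "chrome")

def classify_source(user_agent: str) -> str:
    """Label an intercepted AI request by the kind of client that sent it."""
    ua = user_agent.lower()
    if any(marker in ua for marker in PROGRAMMATIC_MARKERS):
        return "programmatic"
    if any(marker in ua for marker in INTERACTIVE_MARKERS):
        return "interactive"
    return "unknown"
```

A browser extension never gets the chance to run this check, because the programmatic clients never pass through the browser at all.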
The Pentera survey found that 75% of CISOs rely on legacy security controls (endpoint, application, cloud) to protect AI systems. Only 11% have security tools purpose-built for AI infrastructure. That means three out of four security teams are trying to monitor AI traffic with tools that were built before AI traffic existed.
The 21% You Didn't Provision
21.4% of the AI requests in our telemetry come from applications the organization didn't provision. Claude Code alone — a single developer tool — generates more traffic than OpenAI, Google, and GitHub Copilot combined.
Industry data confirms this pattern at scale:
- 81% of employees use AI tools not approved by their organization (UpGuard, 2025)
- The average enterprise has 269 shadow AI tools per 1,000 employees (Reco.ai, 2025)
- 68% of employees use unauthorized AI tools at work, up from 41% in 2023 (Second Talent, 2026)
- Engineering teams have the highest shadow AI adoption at 79% (Second Talent, 2026)
The visibility problem isn't that CISOs don't care. It's that the tools they have can only see a fraction of what's happening. If your AI visibility strategy is "check which SaaS apps employees are accessing," you're seeing the browser-based chat traffic and missing the CLI tools, the SDKs, the agents, and the scripts — which account for the majority of actual AI usage.
What's Actually in the Requests
Here's the part that makes the visibility gap dangerous, not just inconvenient.
13% of AI prompts contain sensitive data, according to Lasso Security. From Nightfall AI's analysis, 96.4% of detected secrets in AI traffic are API keys and passwords — the credentials most likely to enable lateral movement if an AI provider is compromised or if the request is intercepted.
Harmonic Security found that 45.4% of sensitive AI prompts are sent through personal accounts, bypassing corporate controls entirely. And 26% of file uploads to AI chatbots contain sensitive information.
Now combine those numbers with the visibility gap. Two-thirds of CISOs can't see their AI traffic clearly. Half that traffic is automated with no human oversight. One in five requests comes from unapproved tools. And roughly one in eight requests contains sensitive data.
The math isn't complicated: sensitive data is flowing through channels your security team can't monitor, from tools they don't know about, to models they can't identify.
The Cloud Routing Irony
The enterprise response to this visibility problem has been to route all AI traffic through a cloud proxy — Netskope, Zscaler, or one of the other SASE vendors now marketing "AI Security" features.
The irony: to protect your data from being exposed to AI providers, you route it through a third party's cloud infrastructure first. You solve a data exposure problem by adding another data exposure vector.
And it's not cheap. Netskope runs $200-536/user/year for the full platform. Zscaler recently increased prices 35%. Palo Alto Networks' Prisma AIRS is enterprise-custom pricing, which is vendor-speak for "if you have to ask, you can't afford it."
For that price, you get visibility. But you also get:
- Latency. Every AI request now takes a round trip through someone else's infrastructure.
- Complexity. Deployment takes weeks to months. Netskope's own implementation guides reference 208-day timelines.
- A new trust boundary. Your prompts, your source code, your credentials — they're now visible to your security vendor's cloud. If Netskope's stock can drop 17.5% after a product launch, how confident are you that their infrastructure is the safest place for your intellectual property?
What Actually Works
The visibility gap closes when you inspect AI traffic at the network layer, on the device where it originates.
A MITM proxy running locally sees every HTTPS request to an AI provider. Not just the destination URL — the full request body. The prompt. The system instructions. The model selection. The tool calls. The embedded credentials. The response.
This isn't new technology. Charles Proxy, mitmproxy, and Fiddler have been doing HTTP inspection for decades. What's new is applying it specifically to AI traffic with purpose-built detection: secret pattern matching, cost attribution, model identification, and policy enforcement.
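As a sketch of what that purpose-built detection can look like, here is a minimal secret scanner that could run inside a proxy's request hook. The patterns are illustrative, not the rules any particular product ships; a production scanner adds entropy checks and provider-specific formats.

```python
import re

# Illustrative patterns for the secret types that dominate AI traffic
# (API keys and passwords). Not exhaustive.
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_password": re.compile(r'"password"\s*:\s*"[^"]+"'),
}

def scan_for_secrets(request_body: str) -> list[str]:
    """Return the names of secret patterns found in a decoded request body."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(request_body)]
```

Because the scan runs on the decoded body before the request leaves the device, a match can trigger logging, redaction, or a block — none of which a URL-level control can do.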
From our telemetry, here's what a proxy-level view gives you that URL-level visibility doesn't:
| What You See | URL-Level (Firewall/SASE) | Request-Body (MITM Proxy) |
|---|---|---|
| Which provider | Yes | Yes |
| Which model | No (81% of the time) | Yes |
| Token count / cost | No | Yes |
| Prompt content | No | Yes |
| Embedded secrets | No | Yes |
| Tool calls | No | Yes |
| Source application | Partial (User-Agent) | Full |
The 67% of CISOs who report limited visibility are, in most cases, working with URL-level data. They know their developers are calling api.anthropic.com. They don't know what's in the call.
The Budget Problem Is Real
The Pentera survey also found that 78% of organizations fund AI security from existing security budgets, and only 1% have a dedicated AI security budget line item.
This means AI security tools are competing for dollars against every other security priority — endpoint detection, SIEM, cloud posture, vulnerability management, incident response. At $200-500/user/year, enterprise AI security platforms lose that competition at most organizations. The math doesn't work when you're protecting a risk that leadership still considers speculative.
That's why most CISOs end up with nothing. Not because they don't want visibility — because the enterprise options are priced for Fortune 500 companies and the free options don't inspect traffic at the right layer.
The 89% of enterprises without AI-specific security tools aren't making a choice. They're stuck between tools that cost too much and tools that can't see enough. The market is failing them.
What You Can Do This Week
If you're one of the 67%, here's a practical starting point:
1. Inventory your AI traffic at the network level. Deploy a proxy that intercepts HTTPS requests to known AI provider endpoints. This takes minutes, not months. You'll immediately see which providers your team is hitting, from which applications, and how often.
2. Identify your automated AI consumers. Filter for requests with programmatic User-Agents (Node.js, Python-requests, Axios) vs. interactive tools (Cursor, Copilot, ChatGPT). The automated traffic is where your biggest blind spot is.
3. Sample prompt content for sensitive data. You don't need to scan every request on day one. Randomly sample 100 requests and check for credentials, source code, and PII. The results will either confirm you have a problem or give you data to justify a fuller rollout.
4. Count your models. How many distinct AI models are your teams using? If the answer is "we don't know," that's the clearest possible signal that your current visibility tools aren't sufficient.
5. Calculate your actual AI spend. Multiply token counts by model pricing. Compare this to what finance thinks you're spending on AI. The delta is usually surprising.
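Step 5 is simple arithmetic once a proxy surfaces token counts. A sketch with placeholder prices — the model names and per-million-token rates below are illustrative, not a real rate card:

```python
# Placeholder pricing in USD per million tokens: (input, output).
# Substitute the current rate card for each provider you see in telemetry.
PRICE_PER_MTOK = {
    "example-large": (15.00, 75.00),
    "example-small": (0.25, 1.25),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request, given token counts from the response body."""
    in_rate, out_rate = PRICE_PER_MTOK[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# One request's cost; sum over your telemetry for the monthly total.
cost = request_cost("example-large", 12_000, 3_000)  # 0.405
```

Summing this over a month of intercepted requests, grouped by source application, is the number to put next to finance's figure.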
None of this requires an enterprise contract. None of it requires routing your data through a cloud. It requires a proxy that inspects traffic where it originates — on the device.
The Pentera 2026 AI Security & Exposure Benchmark surveyed 300 US CISOs. Full report.
CitrusGlaze telemetry data (CT) is from 26,565 intercepted AI requests across 19 source applications and 9 AI providers. Methodology.
Additional sources: Lasso Security, Harmonic Security, UpGuard, Reco.ai, Second Talent, Nightfall AI, Netskope.
Install CitrusGlaze free — see every AI request, every model, every secret, every cost. Local-first. Five minutes to deploy.