
How to Find Every AI Tool Your Developers Are Using

· Pierre


You think your team uses three AI tools. They use twelve.

That's not a guess. The average enterprise has 269 shadow AI tools per 1,000 employees (Reco.ai, 2025). And 81% of employees use AI tools their organization hasn't approved (UpGuard, 2025).

The gap between what IT knows about and what developers actually use has never been wider. The average shadow AI incident costs $650,000 (IBM Cost of Data Breach Report, 2025). Not because shadow AI is inherently dangerous — but because you can't secure what you can't see.

This post is a practical guide. Not theory. Not a vendor pitch deck. Here's how to actually find every AI tool your developers are using, what you'll discover when you do, and what to do about it.

Why Traditional Discovery Fails for AI

Before we get to what works, here's why your current tools are blind.

Firewall and DNS logs

Your network team can see connections to api.openai.com, api.anthropic.com, and generativelanguage.googleapis.com. That gives you a list of providers. It doesn't tell you which tools are making those calls.

Claude Code, Cursor, custom Python scripts, and CI/CD pipelines all hit api.anthropic.com. In our telemetry, 91.2% of AI requests went to a single provider across 19 different source applications (CitrusGlaze Telemetry). DNS logs show one domain. The reality is 19 tools.

Browser extensions

Harmonic Security, LayerX, Nightfall, Strac — they all run in the browser. If a developer uses ChatGPT in Chrome, the extension sees it.

But 51.4% of AI requests in our environment come from outside the browser (CitrusGlaze Telemetry). Claude Code runs in the terminal. GitHub Copilot runs in VS Code. Python scripts use the OpenAI SDK. Agents make API calls from Node.js. A browser extension sees none of this.

SaaS management platforms

Wing Security and Reco.ai discover AI tools by scanning OAuth grants, SSO logs, and SaaS API integrations. This catches tools that authenticate through your IdP. It does not catch tools where a developer signed up with a personal email, tools accessed via API key, or local tools that don't use SSO at all.

In practice, the AI tools with the highest security risk — the ones accessing your codebase, running with your credentials, sending your source code to API endpoints — are exactly the ones that don't show up in SaaS discovery.

The Four Discovery Methods (And What Each Misses)

Here's an honest assessment of each approach:

| Method | What It Catches | What It Misses | Cost |
| --- | --- | --- | --- |
| DNS/Firewall logs | AI provider domains | Which tools, what's sent, how much it costs | Free (you already have these) |
| Browser extensions | Browser-based AI (ChatGPT, Claude.ai, Gemini) | CLI tools, IDE extensions, SDKs, agents, scripts | $20-200/user/year |
| SaaS discovery (Wing, Reco) | OAuth/SSO-authenticated AI apps | API-key tools, personal accounts, CLI tools | $1,500+/year |
| Network proxy (MITM) | Everything: every protocol, every tool, every request | Nothing (if configured for AI endpoints) | Free-$20/user/month |

The first three methods overlap. The fourth covers everything the others miss.

How to Run a Shadow AI Audit in 5 Minutes

Here's the practical part. Two approaches — one free and manual, one automated.

Approach 1: DNS Log Analysis (Free, 10 minutes)

Pull the last 30 days of DNS queries from your resolver or firewall. Filter for known AI provider domains:

```
api.openai.com
api.anthropic.com
generativelanguage.googleapis.com
api.cohere.ai
api.mistral.ai
api.groq.com
api.deepseek.com
api.together.xyz
api.fireworks.ai
api.perplexity.ai
copilot-proxy.githubusercontent.com
codeium.com
api.cursor.sh
```

Count unique source IPs per domain. This gives you a lower bound: "At least N machines are making requests to OpenAI." It won't tell you which application, what's being sent, or how much it costs. But it's free and you can do it right now.
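
If your resolver exports query logs as text, the counting step can be scripted. Here's a minimal sketch in Python, assuming a log format of `timestamp client_ip query_domain` per line (adjust the parsing to your resolver's actual format; only a subset of the domain list is shown):

```python
from collections import defaultdict

# Subset of the AI provider domains listed above.
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "api.mistral.ai",
}

def count_sources(log_lines):
    """Return {domain: set of unique client IPs} for AI provider queries."""
    sources = defaultdict(set)
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        _, client_ip, domain = parts[:3]
        domain = domain.rstrip(".")  # DNS logs often append a trailing dot
        if domain in AI_DOMAINS:
            sources[domain].add(client_ip)
    return sources

logs = [
    "2025-06-01T09:12:03 10.0.0.4 api.anthropic.com.",
    "2025-06-01T09:12:07 10.0.0.9 api.anthropic.com.",
    "2025-06-01T09:13:44 10.0.0.4 api.openai.com.",
    "2025-06-01T09:14:02 10.0.0.4 example.com.",
]

for domain, ips in sorted(count_sources(logs).items()):
    print(f"{domain}: {len(ips)} unique machines")
```

The output is exactly the "at least N machines" lower bound described above: counts of machines, not tools.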

Approach 2: Network Proxy (5 minutes, full visibility)

A MITM proxy sitting on each developer machine intercepts HTTPS traffic to AI providers. Because it terminates TLS, it can inspect the full request body — the prompt, the model, the tokens, and any secrets embedded in the payload.
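
Once TLS is terminated, an AI API call is just JSON. As an illustrative sketch of what inspection yields (not CitrusGlaze's actual code), here's a function that summarizes an OpenAI-style Chat Completions request body; the `model` and `messages` field names follow the public API format:

```python
import json

def summarize_request(host: str, body: bytes) -> dict:
    """Extract the interesting fields from an intercepted AI API request."""
    payload = json.loads(body)
    # Concatenate plain-string message contents (multimodal parts skipped).
    prompt_text = " ".join(
        m.get("content", "")
        for m in payload.get("messages", [])
        if isinstance(m.get("content"), str)
    )
    return {
        "provider": host,
        "model": payload.get("model", "unknown"),
        "prompt_chars": len(prompt_text),
        "prompt_preview": prompt_text[:60],
    }

body = json.dumps({
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Refactor this auth module"}],
}).encode()

print(summarize_request("api.openai.com", body))
```

DNS logs stop at the hostname; this is the extra layer of detail a TLS-terminating proxy can see.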

With CitrusGlaze, the setup is:

```bash
bash install.sh
citrusglaze start
```

Within minutes, the dashboard shows every AI request from every application on the machine: which tool sent it, which provider received it, which model was used, how many tokens it consumed, and whether any secrets were detected in the prompt.

From our telemetry of 26,565 intercepted requests, here's what discovery looks like:

| Application | Requests | Share |
| --- | --- | --- |
| Node.js (generic scripts) | 13,655 | 51.4% |
| Claude Code (CLI) | 5,685 | 21.4% |
| Axios (HTTP library) | 3,284 | 12.4% |
| Proxy clients | 2,029 | 7.6% |
| curl | 578 | 2.2% |
| Claude CLI | 500 | 1.9% |
| Google API client | 411 | 1.5% |
| GitHub Copilot | 155 | 0.6% |
| Gemini CLI | 30 | 0.1% |

The biggest source of AI traffic — generic Node.js scripts — would be invisible to browser extensions, SaaS discovery, and even basic DNS analysis (it all resolves to the same api.anthropic.com).

What You'll Find (And What to Do About It)

Based on industry data and our own experience, here's what a shadow AI audit typically surfaces:

1. More tools than expected

75% of CISOs who look find unauthorized AI tools (Lasso Security, 2026). The median enterprise uses 87 AI apps (Netskope, 2025). Most IT teams know about 5-10.

What to do: Don't block them. Catalog them. The goal is visibility, not enforcement — at first. Developers adopt AI tools because they're productive. Blocking creates workarounds that are harder to monitor.

2. Personal accounts sending corporate data

45.4% of sensitive AI prompts are sent through personal accounts, bypassing corporate controls entirely (Harmonic Security, 2025). A developer logs into their personal ChatGPT, pastes a code snippet from work, and your DLP never knows.

What to do: This is why network-layer monitoring matters. It catches the traffic regardless of which account is used. You don't need to control the account — you need to see the data.

3. Secrets in prompts

13% of AI prompts contain sensitive data (Lasso Security, 2025). In AI traffic specifically, 96.4% of detected secrets are API keys and passwords (Nightfall AI, 2025). These aren't hypothetical risks — they're credentials that grant access to production systems.

What to do: Scan prompts at the network layer before they leave the machine. CitrusGlaze's Rust engine checks every request against 210+ secret patterns and can block or redact critical credentials in real time.
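
The scanning itself is pattern matching against known credential shapes. Here's a toy illustration using two widely documented public key formats (this is not CitrusGlaze's Rust engine or its actual pattern set, just the general technique):

```python
import re

# Two widely documented credential formats; real scanners ship hundreds.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of secret patterns found in a prompt."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(prompt)]

# AWS's own documented example access key ID.
prompt = "Why does boto3 reject AKIAIOSFODNN7EXAMPLE when I call s3.list_buckets()?"
print(scan_prompt(prompt))
```

A proxy that runs this check before forwarding the request can block or redact the match, which is what makes network-layer placement more effective than scanning logs after the fact.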

4. Automated AI traffic with no human oversight

Over half of AI requests in our environment are programmatic — scripts, agents, CI/CD pipelines (CitrusGlaze Telemetry). Only 29% of organizations feel prepared to manage agentic AI securely (Cisco AI Readiness Index, 2025).

What to do: Treat automated AI traffic the same way you treat automated infrastructure changes: with monitoring, rate limits, and audit trails. An agent making API calls at 3am with your credentials deserves the same scrutiny as a developer deploying code.
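
One concrete control from the list above is a per-agent rate limit in front of the AI provider. Here's a minimal token-bucket sketch (the capacity and period are arbitrary illustrative numbers, not recommendations):

```python
import time

class TokenBucket:
    """Allow at most `capacity` requests per `period` seconds."""
    def __init__(self, capacity: int, period: float):
        self.capacity = capacity
        self.refill_rate = capacity / period  # tokens added per second
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per agent identity: 5 AI calls per minute.
bucket = TokenBucket(capacity=5, period=60.0)
results = [bucket.allow() for _ in range(7)]
print(results)  # first 5 allowed, remaining calls denied
```

Denied calls are the audit trail: an agent that hits its bucket at 3am is exactly the event worth alerting on.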

The Cost of Not Looking

Here's the math that matters.

  • Average shadow AI incident cost: $650,000 (IBM, 2025)
  • Enterprise AI security (Netskope, Zscaler): $200-536/user/year (vendor pricing)
  • Network proxy discovery (CitrusGlaze): $10/user/month
  • DNS log analysis: Free

You don't need to spend six figures to know what AI tools your team is using. You need to spend five minutes.

Start with DNS logs today. Deploy a network proxy this week. By Friday, you'll know exactly what's happening — and you'll wonder how you operated without it.


CitrusGlaze is an open-source AI traffic proxy. Install in 5 minutes, see every AI request, block what's dangerous. Get started →