
Why Browser Extensions Can't Secure AI: The CLI Blind Spot

· Pierre
ai-security browser-extensions shadow-ai cli-tools network-security

Harmonic Security. Nightfall AI. LayerX. Strac. They all sell the same thing: a browser extension that watches what you paste into ChatGPT and stops you from sharing secrets.

It's a reasonable idea — for 2024.

In 2026, more than half of all AI API requests never touch a browser. Browser extensions are guarding the front door while 51% of traffic walks through the back.

The Numbers

We run a MITM proxy that intercepts every AI API request on a machine — browser, terminal, SDK, script, agent. All of it.

From 26,565 intercepted requests (CitrusGlaze Telemetry, 2026):

| Source | Requests | Share |
|---|---|---|
| Node.js (programmatic) | 13,655 | 51.4% |
| Claude Code (CLI) | 5,685 | 21.4% |
| Axios (HTTP library) | 3,284 | 12.4% |
| Proxy clients | 2,029 | 7.6% |
| curl | 578 | 2.2% |
| Claude CLI | 500 | 1.9% |
| Google API client | 411 | 1.5% |
| GitHub Copilot | 155 | 0.6% |
| Gemini CLI | 30 | 0.1% |

51.4% of AI traffic comes from Node.js — scripts, agents, and automated pipelines running outside any browser. Another 21.4% comes from Claude Code, a terminal application. 12.4% comes from Axios, an HTTP client library embedded in apps.

A browser extension sees zero of this.

What Browser Extensions Actually See

Let's be specific about what falls inside and outside a browser extension's field of view.

Visible to Browser Extensions

  • ChatGPT web app (chat.openai.com)
  • Claude web app (claude.ai)
  • Gemini web app (gemini.google.com)
  • Perplexity, DeepSeek, and other browser-based AI chat interfaces
  • Copy-paste into any AI web interface

Invisible to Browser Extensions

  • Claude Code — runs in your terminal. Sends system prompts, file contents, and tool calls directly to Anthropic's API. 21.4% of AI requests in our data.
  • GitHub Copilot — runs as a VS Code/JetBrains extension. Not a web page. Uses a dedicated API endpoint. 0.6% of requests, but rising fast.
  • Cursor — Electron app with its own HTTP stack. The browser extension in Chrome has no idea Cursor is making API calls.
  • Scripts using the openai or anthropic SDKs (Python or Node) — every developer writing agents, automations, or one-off scripts. Shows up as a generic HTTP client in traffic.
  • CI/CD pipelines — GitHub Actions, Jenkins, or custom scripts calling AI APIs during builds.
  • curl and wget — developers testing API calls from terminal.
  • MCP (Model Context Protocol) tool servers — local processes making AI API calls on behalf of the user.
  • Any custom agent — LangChain, CrewAI, AutoGPT, custom scripts. All make direct API calls.
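
Every tool on this list speaks the same protocol: an HTTPS POST with a JSON body. Here is a minimal sketch, using only Python's standard library, of what such a call looks like. The endpoint, model name, and key are placeholders, and the request is built but never sent:

```python
import json
import urllib.request

# Hypothetical sketch: an AI API call built with nothing but the standard
# library. The endpoint, model, and key are placeholders, and the request
# is constructed but never sent. At the network layer this is just an
# HTTPS POST: no DOM, no page, nothing for a browser extension to hook.
payload = {
    "model": "example-model",
    "messages": [{"role": "user", "content": "Review this file: ..."}],
}
req = urllib.request.Request(
    "https://api.example-provider.com/v1/chat",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer sk-PLACEHOLDER",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_method(), req.full_url)
```

Nothing about this request identifies it as "AI usage" to an extension sandboxed inside Chrome; only something sitting on the network path sees it.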

The split is stark. Browser extensions monitor the web interface — the portion of AI usage that's already the most visible and least risky. The invisible portion — CLI tools, SDKs, agents, and scripts — is where secrets actually leak, where API keys get pasted, and where automated systems run with human credentials and zero human oversight.

The Agent Problem

This isn't just about today. The trajectory makes browser extensions less relevant every month.

The agentic AI market is projected to grow from $9.14 billion in 2026 to $139 billion by 2034, a 40.5% compound annual growth rate (Allied Market Research, 2025).

Agents don't use browsers. They use API calls.

An AI agent running on your laptop — making API calls to Anthropic, reading your files, executing shell commands, calling external APIs — is completely invisible to a browser extension. It doesn't render HTML. It doesn't have a DOM. There's no page for the extension to inject into.

Only 29% of organizations feel prepared to securely manage agentic AI (Cisco AI Readiness Index, 2025). Part of the problem is that their security tools were designed for humans using web apps, not agents making API calls.

"But We Also Have an Endpoint Agent"

Some vendors add a lightweight endpoint agent alongside their browser extension. SentinelOne (via its Prompt Security acquisition) does this, and Nightfall ships a desktop agent too.

Here's the issue: endpoint agents that monitor AI traffic need to do one of two things.

  1. Hook into every HTTP client library on the system — intercept calls from Python's requests, Node's fetch, Go's net/http, Rust's reqwest, and every other language and framework. This is fragile, language-dependent, and breaks when libraries change.

  2. Operate at the network layer — intercept traffic at the TCP/TLS level before it leaves the machine. This is what a MITM proxy does.

Option 2 is the right approach. But if you're doing option 2, you don't need the browser extension. The network layer already catches browser traffic — and everything else.

Most endpoint agents take a half-measure: they monitor a fixed list of known AI domains, check process names, and try to correlate. The result is a lot of heuristics and a lot of gaps.
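
The gap in that half-measure is easy to see in code. A minimal sketch of the domain-allowlist heuristic, with an illustrative list rather than any vendor's actual ruleset:

```python
# Sketch of the domain-allowlist heuristic described above. The list is
# illustrative, not any vendor's actual ruleset. Anything absent from it,
# such as a newly launched provider or a self-hosted gateway, slips
# through unseen.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def is_monitored(host: str) -> bool:
    """True only when the host is on the fixed allowlist."""
    return host in KNOWN_AI_DOMAINS

print(is_monitored("api.anthropic.com"))      # True: known provider
print(is_monitored("api.new-llm-vendor.io"))  # False: provider launched last month
print(is_monitored("llm-gateway.internal"))   # False: self-hosted gateway
```

The allowlist is always behind reality: every new provider, proxy, or internal gateway is a blind spot until someone updates the list.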

What Actually Leaks, and From Where

Here's what matters to a security team: where do secrets and sensitive data actually end up in AI prompts?

96.4% of detected secrets in AI traffic are API keys and passwords (Nightfall AI, 2025). These are the credentials that enable lateral movement — an attacker who gets an AWS key from an AI prompt can access your cloud infrastructure.

Now think about where developers are most likely to paste an API key into an AI prompt:

  • Terminal (Claude Code, Cursor): "Here's my .env file, help me debug why the API isn't connecting" — developer pastes full environment variables including secrets.
  • Python/Node script: Developer hardcodes an API key in a script, then the script sends the full source file as context to an AI API for code review.
  • CI/CD pipeline: A build step sends a code snippet to an AI API for automated review — the snippet happens to include a connection string.

All of these happen outside the browser.
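
Pattern-based scanning is what catches the known credential formats in scenarios like these. A sketch in Python, with three illustrative regexes standing in for a production ruleset (these are not CitrusGlaze's actual patterns):

```python
import re

# Illustrative pattern scan over prompt text. These three regexes are
# examples only, standing in for a production ruleset; they are not
# CitrusGlaze's actual patterns.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)\bapi[_-]?key\s*[=:]\s*\S{16,}"),
    "connection_string": re.compile(r"\b\w+://[^\s:@]+:[^\s@]+@\S+"),
}

def scan_prompt(text: str) -> list[str]:
    """Names of every pattern that matches somewhere in the prompt."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(text)]

# The .env-paste scenario from above, using AWS's documented example key.
prompt = "Here's my .env file:\nAWS_KEY=AKIAIOSFODNN7EXAMPLE\nhelp me debug"
print(scan_prompt(prompt))  # ['aws_access_key']
```

The point is where this runs: only a scanner on the network path sees the terminal and script traffic where these pastes actually happen.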

45.4% of sensitive AI prompts are sent through personal accounts, bypassing corporate controls entirely (Harmonic Security, 2025). A developer running claude in their personal terminal, with their personal API key, sending your company's source code — no browser extension will ever see that.

The Vendor Landscape

Let's look at who ships what.

Browser Extension Only

| Vendor | Funding/Backing | What They Miss |
|---|---|---|
| Harmonic Security | VC-funded startup | All CLI tools, SDKs, agents, scripts. By their own data model: browser-only. |
| LayerX | VC-funded, Gartner-recognized | Same. Chrome/Edge only. |
| Strac | Smaller startup | Same. Multi-browser support, but still browser-only. |

Browser + Partial Endpoint

| Vendor | Funding/Backing | Coverage Gaps |
|---|---|---|
| Nightfall AI | VC-funded, G2 #1 DLP | Desktop agent is supplementary, not primary. CLI coverage incomplete. |
| SentinelOne/Prompt Security | $250M acquisition | Requires full SentinelOne platform. Enterprise-only. |

Cloud Proxy (Full Traffic Visibility, But Data Leaves Network)

| Vendor | Price Range | Deployment Time | Your Data Goes Through Their Cloud |
|---|---|---|---|
| Netskope | $200-536/user/year | Weeks to months | Yes |
| Zscaler | $72-375/user/year | Weeks to months | Yes |
| Palo Alto (Prisma AIRS) | Enterprise custom | Months | Depends on config |

Local Network Proxy (Full Traffic Visibility, Data Stays Local)

| Vendor | Price | Deployment Time | Your Data Leaves? |
|---|---|---|---|
| CitrusGlaze | $69/year | 5 minutes | No |

The cloud proxy vendors (Netskope, Zscaler) do see everything. But they see it by routing all your traffic through their infrastructure. Your AI prompts — containing your source code, your secrets, your customer data — pass through their servers for inspection.

There's an irony in "securing" your AI data by sending it through yet another third party's cloud.

What a Network-Layer Proxy Sees

A MITM proxy running locally on the developer's machine sits between every application and the internet. When any process — browser, terminal, SDK, script, agent — makes an HTTPS request to an AI provider, the proxy:

  1. Terminates the TLS connection locally
  2. Reads the full request body (prompt, system prompt, tool calls, file uploads)
  3. Scans for secrets, credentials, PII, and policy violations
  4. Forwards the request to the AI provider (or blocks it)
  5. Captures the response for logging and analysis
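
Steps 2 through 4 reduce to a pure decision function over the request body. A simplified sketch, with a single AWS-key regex standing in for the full scanner (TLS termination and response capture, steps 1 and 5, belong to the proxy runtime and are omitted):

```python
import re

# Simplified sketch of steps 2 through 4: read the body, scan it, decide.
# One AWS-key regex stands in for the full scanner; steps 1 and 5 are
# handled by the proxy runtime and omitted here.
AWS_KEY = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def inspect_request(body: str) -> dict:
    """Scan a request body and decide whether to forward or block it."""
    findings = AWS_KEY.findall(body)
    return {"action": "block" if findings else "forward", "findings": findings}

print(inspect_request('{"prompt": "hello"}'))
print(inspect_request('{"prompt": "my key is AKIAIOSFODNN7EXAMPLE"}'))
```

Because the function only sees bytes on the wire, it is indifferent to whether the body came from Chrome, Claude Code, or a cron job.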

This works regardless of:

  • Which programming language the client uses
  • Whether the client is a browser, CLI, IDE, or daemon process
  • Whether the developer knows the tool exists
  • Whether the AI tool is sanctioned or shadow IT

The scanning happens in Rust. 210+ secret patterns. Shannon entropy detection for unknown credential formats. Zero network roundtrip — everything happens on-device.
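
The entropy check is straightforward to sketch. The production scanner is in Rust; this is a Python illustration, and the 20-character minimum and 4.0-bit threshold are illustrative defaults, not the shipped values:

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def flag_high_entropy_tokens(text: str, min_len: int = 20,
                             threshold: float = 4.0) -> list[str]:
    # Flag long, random-looking tokens even when no known pattern matches.
    # The length floor and entropy threshold are illustrative defaults.
    return [
        tok for tok in text.split()
        if len(tok) >= min_len and shannon_entropy(tok) > threshold
    ]

print(flag_high_entropy_tokens("please review the config file"))  # []
print(flag_high_entropy_tokens("my key gH9xQ2mZ7kP4vN8rT3wY6bL1cD5fJ0aS leaked"))
```

Ordinary prose tokens are short and repetitive, so they fall under both thresholds; a 32-character random string clears them easily. This is how credentials in formats no one has written a pattern for still get caught.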

The Real Objection: "Won't a MITM Proxy Break Things?"

Fair question. MITM proxies have a reputation for being finicky. Certificate trust, HTTP/2 handling, streaming responses, WebSocket support — lots of things can go wrong.

We've tested CitrusGlaze with 39 AI tools and verified compatibility:

  • Claude Code, Claude CLI, Claude web
  • GitHub Copilot (VS Code and JetBrains)
  • Cursor
  • ChatGPT (web and API)
  • Gemini (web and API)
  • pip, npm, cargo, brew (package managers)
  • All major AI SDKs (openai, anthropic, google-generativeai)
  • MCP servers and tool calling

15 out of 15 end-to-end compatibility tests pass. No one else in this market has a public e2e test suite for AI tool compatibility.

The main hurdle is certificate trust: the proxy generates a local CA certificate, and every tool needs to trust it. For most tools, it's one environment variable (NODE_EXTRA_CA_CERTS for Node, REQUESTS_CA_BUNDLE for Python, SSL_CERT_FILE as a universal fallback). Our installer handles this automatically.

What This Means for Security Teams

If you're evaluating AI security tools, ask one question: does it see all the traffic, or just the browser traffic?

If the answer is "just browser," you're accepting a 51%+ blind spot. You're securing ChatGPT conversations while Claude Code sends your entire codebase to Anthropic unmonitored. You're catching the intern who pastes a password into Claude.ai while the senior engineer's Python script uploads credentials from .env files every time it runs.

The shift from browser-based AI to CLI/SDK/agent-based AI is accelerating. 84% of developers use or plan to use AI coding tools (Stack Overflow, 2025). Claude Code accounts for 4% of all GitHub commits (Anthropic, 2025). Agents are projected to be a $139B market by 2034.

Browser extensions were the right tool in 2024. The traffic has moved. Your security should move with it.


Try It

CitrusGlaze is a local MITM proxy that sees every AI request on your machine — browser, terminal, SDK, agent. Secret detection. Cost tracking. Shadow AI discovery. Data never leaves your device.

Install in 5 minutes: bash install.sh

See what your AI tools are actually sending.

citrusglaze.dev
