
AI Agents Run With Your Credentials. They Don't Run With Your Judgment.

· Pierre
ai-security agentic-ai credentials agent-security shadow-ai

You gave Claude Code access to your terminal. It can read your files, run shell commands, make API calls, and commit code. It runs as you — your SSH keys, your AWS credentials, your database connection strings, all in scope.

Now multiply that by every developer on your team. Then add the Python scripts calling OpenAI at 3am, the CI/CD pipelines sending code to Claude for review, the MCP servers executing tool calls on behalf of whoever asked.

51% of AI API traffic we intercept comes from automated sources — not humans typing into chat windows (CitrusGlaze Telemetry, 2026). Agents, scripts, and programmatic clients now generate more AI traffic than people do.

And almost nobody is watching what they send.

The Traffic Has Shifted

When AI security started, the threat model was simple: a human pastes a secret into ChatGPT. Stop the paste, stop the leak.

That model is dead.

From 26,565 intercepted AI API requests on a real development machine:

Source                   Share   Human in the Loop?
Node.js (programmatic)   51.4%   No
Claude Code (CLI)        21.4%   Sometimes
Axios/HTTP libraries     12.4%   No
Proxy clients             7.6%   No
curl                      2.2%   Yes
Claude CLI                1.9%   Yes
Google API client         1.5%   Sometimes
GitHub Copilot            0.6%   Passive
Gemini CLI                0.1%   Yes

The "sometimes" entries are the interesting ones. Claude Code has a human nearby, but the human isn't reviewing every API request. They're approving tool calls in batches, or running with auto-approve, or stepping away while Claude finishes a task. The human is present but not supervising.

The "no" entries are fully autonomous. Scripts running on a cron job. CI pipelines triggered by a git push. Background agents processing data. They make API calls with someone's credentials and zero oversight.

What Agents Actually Send

Here's what's different about agent traffic versus human chat.

Humans paste snippets. A developer copies 20 lines of code into ChatGPT and asks "why is this failing?" The blast radius is small — the snippet, maybe a file path.

Agents send context. Claude Code reads your entire project to answer a question. It sends file trees, full source files, git diffs, environment configurations, and tool call results. A single agent session can send hundreds of API requests, each containing file contents from your machine.

We see this in the token data. Across all traffic, the average output-to-input ratio is 4.45:1; the AI generates 4.45x more tokens than it receives. But for agent sessions the input side is also massive in absolute terms: entire codebases sent as context, not one-line questions.

That context includes whatever is accessible to the process. If the agent runs as the developer — and it does — it has access to .env files, SSH keys, AWS credentials, database URLs, and every secret in the home directory.

96.4% of detected secrets in AI traffic are API keys and passwords (Nightfall AI, 2025). These aren't passwords typed into chat. They're credentials embedded in files that agents read and send as context.
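To make that concrete, here's a minimal sketch in Python of scanning an agent's outbound context for embedded credentials. The four patterns below are a small illustrative subset (real scanners ship hundreds), and the example context blob is invented:

```python
import re

# Illustrative subset of credential patterns; production scanners use many more.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_pat":        re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "stripe_secret_key": re.compile(r"sk_live_[A-Za-z0-9]{24,}"),
    "postgres_url":      re.compile(r"postgres(?:ql)?://\S+"),
}

def scan_context(blob: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs found in an outbound context blob."""
    return [
        (name, match)
        for name, pattern in SECRET_PATTERNS.items()
        for match in pattern.findall(blob)
    ]

# A .env file the agent read off disk and stuffed into its context window
# (AKIAIOSFODNN7EXAMPLE is AWS's documented example key, not a real one):
context = """Why is my deploy failing? Here is my config:
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
DATABASE_URL=postgres://app:hunter2@db.internal:5432/prod
"""
for name, match in scan_context(context):
    print(f"{name}: {match}")
```

The point of the sketch: the secret never appears in a chat box. It appears in a file, and the file appears in a request body.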

The Permission Model Is Broken

An AI agent runs as a user-level process. It inherits every permission you have. File access, network access, credential access — all of it.

There's no concept of "this agent should see my code but not my .env file." There's no scope restriction that says "you can call the Anthropic API but not read my AWS credentials." The agent gets everything, because it runs as you.

This is the same problem containers solved for microservices. Before containers, every service on a host could read every other service's files and credentials. The solution was isolation: each service gets its own filesystem, its own network, its own secrets.

We haven't built that for AI agents yet.

Only 29% of organizations feel prepared to securely manage agentic AI (Cisco AI Readiness Index, 2025). That number should be lower. Most of the 29% are overestimating their readiness because their "AI security" is a browser extension that can't even see agent traffic.

The $139 Billion Blind Spot

The agentic AI market is projected to hit $139 billion by 2034, growing at 40.5% CAGR from $9.14 billion in 2026 (Allied Market Research, 2025).

That's a 15x expansion in eight years. Every one of those agents will need credentials to do its job. API keys, OAuth tokens, database passwords, cloud access keys.

And the security industry is still building tools designed for humans in browsers.

Look at the vendor landscape:

Browser extensions (Harmonic, LayerX, Strac): Can't see agents at all. Agents don't use browsers. Zero visibility into 51%+ of traffic.

Cloud proxies (Netskope, Zscaler): Can see agent traffic if you route everything through their cloud. But that means your agent's full context — every file it reads, every credential it accesses — passes through a third party's infrastructure. You're "securing" your data by sending it to someone else.

API wrappers (Portkey, LiteLLM, Langfuse): Only see traffic that goes through their SDK. Agents using direct API calls, custom HTTP clients, or non-supported providers are invisible. And they focus on observability, not security — they'll track your costs but won't catch a leaked AWS key.

Endpoint agents (SentinelOne/Prompt Security, Nightfall): Closest to the right approach, but they require enterprise platforms and enterprise budgets. $200-500/user/year. A 50-person engineering team is looking at $10-25K annually just to see what their AI tools are doing.

None of these were built for the world where half the traffic is automated.

What Agent Security Actually Requires

If you want to secure agents, you need three things that most tools don't provide:

1. Network-layer visibility

You need to see every request, regardless of which tool, SDK, or language makes it. Not "the tools we integrate with" — all of them. The only way to do this is at the network layer, where every HTTP request passes through regardless of its origin.

A local MITM proxy does this. It sits between the application and the internet, terminates TLS, reads the request body, scans for secrets, and forwards or blocks. Works for Node.js agents, Python scripts, Rust CLI tools, Go services, CI/CD pipelines — anything that makes an HTTPS request.
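As an illustration, a toy version of this can be written as a mitmproxy addon, run with `mitmdump -s block_secrets.py`. The host list and the two patterns here are illustrative stand-ins, not a real detection set:

```python
import re

# Toy pattern set for illustration only.
SECRET_RE = re.compile(r"AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}")

AI_HOSTS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

def body_has_secret(body: str) -> bool:
    return bool(SECRET_RE.search(body or ""))

try:
    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        """mitmproxy hook: runs on every outbound request, whatever client sent it."""
        if flow.request.pretty_host in AI_HOSTS and body_has_secret(flow.request.get_text()):
            # Short-circuit before the request leaves the machine.
            flow.response = http.Response.make(
                403, b"blocked: credential detected in outbound AI request"
            )
except ImportError:
    pass  # mitmproxy not installed; body_has_secret() is still usable on its own
```

Because the hook fires on every request the proxy sees, it makes no difference whether the caller was a Node.js agent, a Python cron job, or a CI runner.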

2. Credential detection at request time

It's not enough to scan code repos for secrets (that's what GitGuardian and Trufflehog do). You need to scan what's being sent to AI providers in real time. An agent might read a credential from a file, include it in its context, and send it to Claude or GPT. The secret was never committed to git. It was never in a PR. It existed in a file on disk, and the agent sent it.

Our answer is 210+ secret detection patterns (AWS access keys, GitHub tokens, database connection strings, private keys, Stripe API keys) running in Rust on every request before it leaves the machine, plus Shannon entropy detection to catch novel credential formats that don't match any known pattern.
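Entropy scoring itself is simple. A sketch, with the caveats that the threshold and minimum length below are illustrative and that real scanners apply this per token rather than to whole prose (natural-language sentences can score surprisingly high):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character, estimated from the string's own character frequencies."""
    if not s:
        return 0.0
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def looks_like_secret(token: str, min_len: int = 20, threshold: float = 4.0) -> bool:
    """Flag long, high-entropy tokens that match no known pattern.

    Meant for individual whitespace-delimited tokens; thresholds are illustrative.
    """
    return len(token) >= min_len and shannon_entropy(token) >= threshold

# AWS's documented example secret key scores high; an ordinary identifier doesn't.
print(looks_like_secret("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"))  # True
print(looks_like_secret("environment_variable"))                      # False
```

This is why entropy checks complement pattern matching: a brand-new key format has no regex yet, but it still looks like randomness.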

3. Data stays local

This is the one nobody wants to talk about. Every cloud-based security tool requires your AI prompts to pass through their infrastructure. For human chat, the risk is limited — a few sentences, maybe a code snippet.

For agents, the risk is massive. An agent session can include your entire codebase, your environment variables, your git history, your database schemas. Routing all of that through a third-party cloud "for security" creates a second exfiltration vector in the name of preventing the first one.

Local-first is the only architecture that makes sense for agent traffic. Scan on-device. Alert on-device. Never send the data anywhere except its intended destination.

The Real Risk: Credential Replay

Here's the scenario that should keep security teams up at night.

  1. A developer runs an AI agent that reads project files for context.
  2. The agent reads .env, which contains an AWS access key, a database URL, and a Stripe API key.
  3. The agent includes these files in its context window and sends them to the AI provider.
  4. The prompt (with credentials) is now stored in the AI provider's logs.
  5. If the AI provider is compromised, or the data is used for training, or there's a data retention policy you didn't expect — those credentials are exposed.

This isn't theoretical. 13% of all AI prompts contain sensitive data (Harmonic Security, 2025). And agent prompts contain far more sensitive data than human prompts because agents read entire files rather than pasting snippets.

The fix is straightforward: scan outbound requests for credentials before they leave the machine. Block or redact secrets at the network layer. The agent still works — it just can't accidentally exfiltrate your AWS keys.
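Redaction, as opposed to outright blocking, can be as simple as substituting matches before the request is forwarded. The patterns here are again a tiny illustrative subset:

```python
import re

SECRET_RE = re.compile(
    r"AKIA[0-9A-Z]{16}"           # AWS access key ID
    r"|ghp_[A-Za-z0-9]{36}"       # GitHub personal access token
    r"|sk_live_[A-Za-z0-9]{24,}"  # Stripe live secret key
)

def redact(body: str) -> str:
    """Replace credentials with a placeholder; the request still goes through."""
    return SECRET_RE.sub("[REDACTED]", body)

print(redact("deploying with AKIAIOSFODNN7EXAMPLE to prod"))
# prints: deploying with [REDACTED] to prod
```

The agent loses nothing it needs to do its job; the provider's logs just never see the key.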

What 51% Automated Traffic Means for Your Security Posture

If half your AI traffic is automated and you're only monitoring browser-based chat, here's what you're missing:

  • Every credential an agent reads from disk and includes in context
  • Every file an agent sends to an AI provider for analysis
  • Every tool call an agent makes on your behalf
  • Every background script calling AI APIs on a schedule
  • Every CI/CD pipeline sending code to AI for review

You have zero visibility into whether those automated systems are sending your secrets, your customer data, or your proprietary code to external AI providers. You don't know which providers they're calling, how much they're spending, or what data they're including.

That's not a hypothetical risk. It's your current exposure.

The Path Forward

The agent security problem isn't going to solve itself. Traffic is shifting from human chat to autonomous agents. The tools and credentials available to those agents are expanding. The oversight is not keeping up.

Three things every engineering team should do today:

  1. See what's actually being sent. Run a local proxy for a week. Look at the traffic. You will find secrets in outbound AI requests. Every team that does this finds something.

  2. Scan at the network layer, not the application layer. Application-level scanning misses everything that doesn't go through your approved tools. Network-level scanning catches everything.

  3. Keep your data local. If your "security" tool requires sending your AI traffic through someone else's cloud, you've added a second data exposure point. Choose tools where scanning happens on-device.
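For step 1, any local intercepting proxy will do. A sketch assuming mitmproxy is installed; the port is arbitrary and the cert path is mitmproxy's default, so adjust for your proxy of choice:

```shell
# Start an intercepting proxy on localhost (mitmproxy's mitmdump shown here).
mitmdump --listen-port 8080 &

# Route HTTP(S)-speaking tools and SDKs through it...
export HTTPS_PROXY=http://127.0.0.1:8080

# ...and trust the proxy's CA so TLS interception works.
export REQUESTS_CA_BUNDLE=~/.mitmproxy/mitmproxy-ca-cert.pem   # Python requests
export NODE_EXTRA_CA_CERTS=~/.mitmproxy/mitmproxy-ca-cert.pem  # Node.js agents
```

Run your normal workflow for a week with this in place, then read the captured request bodies.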

The agents are already running. They're running with your credentials. They just aren't running with your judgment.


Try It

CitrusGlaze is a local MITM proxy that sees every AI request on your machine — browser, terminal, SDK, agent. Secret detection. Cost tracking. Shadow AI discovery. Data never leaves your device.

Install in 5 minutes: bash install.sh

See what your AI agents are actually sending.

citrusglaze.dev

Scan yours free