What Claude Code Actually Sends: A Traffic Analysis
Claude Code is the single largest source of AI traffic in our environment.
Out of 26,565 intercepted AI requests across 19 applications, 5,685 came from Claude Code — 21.4% of all AI traffic. More than OpenAI, Google, GitHub Copilot, and all other providers combined (CitrusGlaze Telemetry).
I use Claude Code daily. It writes real code, runs shell commands, reads and edits files across entire projects. It's the most capable AI coding assistant I've used.
But I also run a MITM proxy on everything that leaves my machine. So I can tell you exactly what Claude Code sends to Anthropic, how much data it ships per request, and what it picks up along the way.
How We Captured This
CitrusGlaze runs a local MITM proxy on port 8888. When Claude Code makes API calls to api.anthropic.com, the proxy decrypts the HTTPS request, logs the full request body, scans it for secrets, counts tokens, and forwards it to Anthropic. The same happens in reverse for responses.
No data leaves the machine. No cloud routing. We're inspecting our own traffic on our own device.
Claude Code identifies itself in the User-Agent header, making attribution straightforward. Every request is logged with timestamp, source application, provider, model, token counts, and any secrets detected.
The Request Structure
Every Claude Code API call is a POST to https://api.anthropic.com/v1/messages. The request body follows the Messages API format:
```json
{
  "model": "claude-opus-4-20250514",
  "max_tokens": 16384,
  "system": [
    {"type": "text", "text": "...system prompt..."},
    {"type": "text", "text": "...tool definitions..."}
  ],
  "messages": [
    {"role": "user", "content": "..."},
    {"role": "assistant", "content": "..."},
    ...
  ],
  "tools": [...]
}
```
Three things stand out.
1. The System Prompt Is Large
Claude Code's system prompt includes detailed instructions for its behavior, tool usage patterns, safety guidelines, and context about the project. It also includes the contents of any CLAUDE.md files in your project hierarchy.
This means the system prompt grows with your project configuration. If you have a detailed CLAUDE.md (and you should — it makes Claude Code much more effective), that entire file gets sent with every request.
2. The Context Window Accumulates
Claude Code maintains conversation context. Each subsequent request in a session includes the full conversation history — your messages, Claude's responses, tool call results, file contents that were read, command outputs.
Early requests in a session are small. By request 10 or 20, you're sending megabytes of accumulated context. Our telemetry shows Claude Code consuming 33.8 million input tokens across its requests, heavily weighted toward later turns in longer sessions (CitrusGlaze Telemetry).
This is normal for how the Anthropic Messages API works. But it means that a file you read at the start of a session — including any secrets in that file — gets sent to Anthropic on every subsequent request in that session.
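The growth compounds because each request body contains every prior turn. A toy model makes the shape of the curve clear (the 4-characters-per-token estimate is a rough heuristic, not Anthropic's tokenizer):

```python
# Toy model of context accumulation across a session.
# Rough heuristic: ~4 characters per token; real tokenizers differ.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

history = []            # full conversation, re-sent on every request
per_request_input = []  # input tokens billed for each request

for turn in range(1, 21):
    history.append(f"user message {turn}")
    history.append("tool result: " + "x" * 4000)  # e.g. a file read (~1k tokens)
    # Input tokens for this request = the ENTIRE history so far.
    per_request_input.append(sum(estimate_tokens(m) for m in history))

print(per_request_input[0], per_request_input[9], per_request_input[19])
```

The file read on turn 1 is billed again on turn 2, turn 3, and every turn after, which is why late-session requests dominate the input-token totals.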
3. Tool Calls Expose Your Filesystem
Claude Code uses tools to read files, write files, run bash commands, search your codebase, and more. Every tool call result gets added to the conversation context.
When Claude Code reads a file, the full file content becomes part of the conversation. When it runs ls or find, the directory listing goes to Anthropic. When it runs a build command, the full output (including paths, environment variables in error messages, and sometimes credentials in stack traces) becomes part of the context.
This is what makes Claude Code powerful — it can see your actual code. It's also what makes traffic analysis worth doing.
What It Sends to Anthropic
Here's a breakdown of what we observe in Claude Code API requests:
Source Code
This is expected. You're using Claude Code to write and edit code. It reads your files, you discuss them, it proposes changes. Your source code goes to Anthropic.
What's less obvious: Claude Code's context engine is aggressive. When you ask it to fix a bug in one file, it often reads several related files to understand the context. Each of those files is now in the conversation and gets re-sent on every subsequent request.
File Paths and Directory Structure
Every file-read, directory-listing, and code-search tool call sends your project structure to Anthropic. File paths reveal project names, internal naming conventions, and organizational structure.
Shell Command Output
When Claude Code runs a command (build, test, git status, etc.), the full stdout and stderr go into the conversation context. Build outputs frequently include:
- Absolute file paths on your machine
- Environment variable names (and sometimes values, especially in error messages)
- Package versions and dependency trees
- Network configurations in test output
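To see why this matters, consider what a shell tool call actually captures. This sketch (with a fabricated connection string) mimics a failing command: everything on stdout and stderr, including the credential in the error message, lands in the conversation context:

```python
import subprocess
import sys

# Illustrative: what a shell-style tool call captures. The DB_URL value
# here is fake; in a real session it might be a live credential.
result = subprocess.run(
    [sys.executable, "-c",
     "print('/home/dev/prod-api')\n"
     "raise SystemExit('FATAL: DB_URL=postgres://app:hunter2@db.internal/prod')"],
    capture_output=True, text=True,
)

# Both streams become one tool result, appended to the conversation
# and re-sent to the API on every subsequent request in the session.
tool_result = result.stdout + result.stderr
print(tool_result)
```

Note that the command "worked" from the assistant's perspective: it produced output it can reason about. The leak is a side effect of normal operation.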
Git History
Claude Code reads git logs, diffs, and blame output. This includes commit messages, author names, email addresses, and the full diff of changes.
CLAUDE.md Configuration
If you have CLAUDE.md files (project instructions), their entire contents are sent as part of the system prompt on every request. These files often contain internal documentation, architecture notes, deployment procedures, and sometimes references to internal systems.
The Token Economics
From our telemetry across 5,685 Claude Code requests:
| Metric | Value |
|---|---|
| Total requests | 5,685 |
| Share of all AI traffic | 21.4% |
| Primary model | Claude Opus 4.5 |
Claude Code overwhelmingly uses Opus — the most expensive model in the Anthropic lineup. At Opus pricing ($15/million input tokens, $75/million output tokens), costs accumulate quickly during long coding sessions.
The output-to-input token ratio across all our Anthropic traffic is 4.45:1 (CitrusGlaze Telemetry). Claude Code generates substantially more text than it receives, consistent with code generation workloads where a short instruction produces pages of code.
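Back-of-envelope math from the figures above. Treating the 4.45:1 ratio as if it applied to Claude Code's input volume alone is an approximation, since that ratio was measured across all Anthropic traffic:

```python
# Back-of-envelope Opus cost from the telemetry above. Applying the
# all-Anthropic 4.45:1 output ratio to Claude Code's input tokens
# is an approximation, not a measured per-app figure.
INPUT_TOKENS = 33_800_000
OUTPUT_RATIO = 4.45
INPUT_PRICE = 15 / 1_000_000    # $ per input token (Opus)
OUTPUT_PRICE = 75 / 1_000_000   # $ per output token (Opus)

input_cost = INPUT_TOKENS * INPUT_PRICE
output_cost = INPUT_TOKENS * OUTPUT_RATIO * OUTPUT_PRICE
print(round(input_cost), round(output_cost))
```

Even under this rough model, output tokens dominate the bill by more than an order of magnitude, which is why session hygiene has a cost dimension as well as a security one.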
What Secrets Does It Pick Up?
This is where it gets interesting.
Claude Code doesn't intentionally seek out secrets. But its context engine reads files, runs commands, and accumulates history. Secrets end up in the context window through normal development workflows:
1. .env files read as context. If Claude Code reads a .env file (or a file that imports from .env), those values are in the conversation for the rest of the session.
2. Credentials in error output. A failed database connection prints the connection string. A failed API call shows the authorization header. Claude Code runs the command, captures the output, sends it to Anthropic.
3. Private keys in config directories. Asking Claude Code to help with SSH configuration or TLS setup? It might read your actual key files if they're in the project directory.
4. API keys in source code. Despite best practices, developers hardcode API keys. When Claude Code reads those files for context, the keys go to Anthropic.
5. Tokens in git history. git diff output can include removed credentials. Even if you've already rotated the key, it's now in Claude Code's context.
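A minimal version of the detection pass looks like this. The three regexes are illustrative samples, not CitrusGlaze's 210+ pattern set:

```python
import re

# Illustrative sample patterns; a real scanner covers far more formats
# and validates matches to cut false positives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "url_credentials": re.compile(r"://[^/\s:]+:([^@\s]+)@"),  # user:pass@ in URLs
}

def scan(text: str) -> list[str]:
    """Return the names of secret patterns found in an outbound request body."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

# A failed-connection error message of the kind build output produces:
body = "Error: could not connect to postgres://app:hunter2@db.internal:5432/prod"
print(scan(body))
```

Run over the full request body before it leaves the machine, a scan like this catches exactly the cases listed above: .env contents, error output, and credentials surfacing in git diffs.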
Nightfall AI's research shows 96.4% of detected secrets in AI traffic are API keys and passwords (Nightfall AI, 2025). These are the credentials most likely to enable lateral movement if they end up in the wrong place.
What This Means for You
I'm not writing this to scare you away from Claude Code. I use it every day. It's extraordinarily productive.
I'm writing this because knowing what your tools send is a prerequisite for using them safely. And right now, most developers have zero visibility into this.
If you use Claude Code (or Cursor, or Copilot, or any AI coding assistant):
Know what's in your context. The files Claude Code reads become part of every subsequent API request. Be aware of what's in your project directory.
Keep secrets out of your project tree. Use a secrets manager. Reference environment variables instead of hardcoding values. If your .env file is in the project root, Claude Code will eventually read it.

Watch your session length. Longer sessions accumulate more context. The file you read 30 minutes ago is still being sent to Anthropic on every request. Start new sessions periodically for sensitive work.

Use .claudeignore. Claude Code supports .claudeignore files (like .gitignore) to exclude files from its context. Add your secrets files, private keys, and any sensitive configuration.

Monitor your traffic. This is the part where I mention what I built. CitrusGlaze scans every AI request for 210+ secret patterns in real time. It catches the AWS key in the error output, the database password in the connection string, and the GitHub token in the git diff, before any of it leaves your machine.
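The "reference environment variables" advice, in code form. A sketch; SERVICE_API_KEY is a made-up variable name for illustration:

```python
import os

# Read the credential from the environment instead of a file Claude Code
# might ingest. SERVICE_API_KEY is a hypothetical name for illustration.
def get_api_key() -> str:
    key = os.environ.get("SERVICE_API_KEY")
    if key is None:
        # Fail loudly rather than falling back to a hardcoded default:
        # a literal key in source ends up in the model's context the
        # moment that file is read.
        raise RuntimeError("SERVICE_API_KEY is not set")
    return key
```

If the assistant reads this file, it sees the variable's name but never its value, which lives only in your shell environment or secrets manager.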
The Bigger Picture
Claude Code accounts for 4% of all GitHub commits (Anthropic, 2025). GitHub Copilot has 20 million users generating 46% of code in files where it's active (GitHub Octoverse, 2025). 42% of all new code is AI-assisted (Sonar, 2025).
AI coding assistants are not optional tools anymore. They're infrastructure. And infrastructure needs monitoring.
The enterprise security market knows this — Palo Alto, Cisco, Check Point, and SentinelOne collectively spent over $700 million acquiring AI security startups in 2025 alone. But their products cost $200-536/user/year, require cloud routing (your prompts go through their infrastructure), and take months to deploy.
There's a simpler approach: a local proxy that sees everything, costs $10/user/month, and installs in 5 minutes.
```shell
# See what your AI tools are actually sending
bash install.sh
citrusglaze start
```
Your AI tools are powerful. They're also talkative. Know what they're saying.
Sources: CitrusGlaze Telemetry (26,565 intercepted requests), Nightfall AI 2025, Anthropic 2025, GitHub Octoverse 2025, Sonar State of AI in Code 2025
CitrusGlaze is an open-source AI traffic proxy. See every AI request. Block what's dangerous. Keep what's productive.