
Your Developer Leaked an AWS Key to Claude. Here's Your 60-Minute Playbook.

Pierre
incident-response ai-security secret-leakage credentials playbook


It's 2:47pm on a Tuesday. A developer pastes a .env file into Claude to debug a deployment issue. The file contains an AWS access key, a database connection string, and a Stripe API token. The prompt hits Anthropic's API and is now sitting in their systems.

This isn't hypothetical. 13% of all AI prompts contain sensitive data, according to analysis of millions of enterprise prompts (Lasso Security, 2025). 96.4% of detected secrets in AI traffic are API keys and passwords — the exact credentials that enable lateral movement (Nightfall AI, 2025).

Most security teams have a playbook for leaked credentials in a GitHub commit. Almost none have one for credentials leaked to an AI provider. Here's yours.


Minute 0-5: Confirm and Classify

What happened. An employee sent credentials to an AI provider via prompt, file upload, or embedded code context.

Your first three questions:

  1. What was leaked? AWS keys, database credentials, API tokens, private keys, connection strings — each has different blast radius.
  2. Which AI provider received it? This determines your data retention exposure. Anthropic retains API prompts for 30 days by default; OpenAI's API retention is also 30 days. Google's retention varies by product.
  3. Was this through a sanctioned or shadow AI tool? If it was a personal ChatGPT account, you have less recourse than an enterprise API contract.

45.4% of sensitive AI prompts are sent through personal accounts, bypassing corporate controls entirely (Harmonic Security, 2025). If your developer was using a personal account, the provider may not even acknowledge your deletion request.

Action items:

  • Pull the exact prompt content if you have proxy logs (this is why network-layer visibility matters)
  • Identify every credential in the leaked content
  • Classify severity: Critical (production credentials), High (staging/development credentials with network access), Medium (low-privilege tokens), Low (expired or scoped tokens)
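Identification and classification can be automated as a first pass. The sketch below is illustrative, not CitrusGlaze's actual detection set: the patterns cover a handful of well-known credential formats, and the severity map is an assumption following the classification above.

```python
import re

# Illustrative detection patterns -- real scanners ship hundreds of these.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "stripe_live_key": re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"),
    "github_pat": re.compile(r"\bghp_[0-9a-zA-Z]{36}\b"),
    "postgres_url": re.compile(r"postgres(?:ql)?://\S+:\S+@\S+"),
}

# Assumed severity mapping; tune to your own environment.
SEVERITY = {
    "aws_access_key": "critical",
    "stripe_live_key": "critical",
    "postgres_url": "critical",
    "github_pat": "high",
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (credential_type, severity) for every secret type found."""
    hits = []
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append((name, SEVERITY.get(name, "medium")))
    return hits

leaked = "AWS_KEY=AKIAIOSFODNN7EXAMPLE\nDB=postgresql://app:hunter2@db.internal:5432/prod"
print(scan(leaked))  # two critical hits: aws_access_key and postgres_url
```

A pass like this over your proxy logs also answers the detection question: you know exactly which credential types left the building, not just that "something sensitive" did.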

Minute 5-15: Rotate Everything

Do not investigate further before rotating. The leaked credential is live. Rotate first, understand later.

For each leaked credential type:

For each credential type: the rotation step, then what to verify.

  • AWS Access Key: run aws iam create-access-key, then aws iam delete-access-key --access-key-id AKIA... Check attached policies first so you know what this key could do.
  • Database password: rotate in your secrets manager and update connection strings. If the leak was a connection string with host:port, check that host's access logs.
  • Stripe API key: roll the key in Stripe Dashboard → Developers → API keys. Check for recent charges from unknown sources.
  • GitHub PAT: revoke in Settings → Developer settings → Personal access tokens. Check the audit log for any repo access you don't recognize.
  • Private key (RSA/EC): regenerate the key pair and update every system using the public key. If it's an SSH key, check authorized_keys on all servers.
  • Generic API token: regenerate in the provider's dashboard. Specifics vary by service.

Do this for every credential in the leaked content. Not just the ones you think are important. A Slack webhook URL feels low-risk until someone uses it to post phishing links to your engineering channel.

97% of organizations that suffered an AI-related breach lacked proper AI access controls (IBM Cost of Data Breach, 2025). If you're in this situation, you're not alone, but you still need to move fast.


Minute 15-30: Assess Blast Radius

Now that credentials are rotated, figure out what could have happened between the leak and the rotation.

Check these logs:

  1. Cloud provider audit logs. AWS CloudTrail, GCP Cloud Audit Logs, Azure Activity Log. Filter by the leaked credential. Look for any API calls you don't recognize.
  2. Application access logs. If a database connection string leaked, check the database's connection log for unfamiliar source IPs.
  3. Git provider audit log. If a PAT leaked, check for repository clones, webhook creations, or permission changes.
  4. Payment provider logs. If payment credentials leaked, check transaction history.

What you're looking for:

  • Any use of the credential from an IP or user-agent you don't control
  • Any new resources created (EC2 instances, Lambda functions, S3 buckets)
  • Any data accessed that the credential had permission to read
  • Any elevated permissions or new credentials created using the leaked one
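The first of those checks can be scripted against exported CloudTrail events. The field names below match CloudTrail's JSON record format; the known-network list and the key ID are placeholders you'd replace with your real egress ranges and the leaked key's ID.

```python
import ipaddress

# Assumption: your known egress ranges. Replace with your real CIDRs.
KNOWN_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def suspicious_events(events: list[dict], key_id: str) -> list[dict]:
    """Flag events made with the leaked key from IPs you don't control."""
    flagged = []
    for ev in events:
        if ev.get("userIdentity", {}).get("accessKeyId") != key_id:
            continue  # event used a different credential
        ip = ipaddress.ip_address(ev["sourceIPAddress"])
        if not any(ip in net for net in KNOWN_NETWORKS):
            flagged.append(ev)
    return flagged

# Two sample events: one from a known office range, one from an unknown IP.
events = [
    {"eventName": "ListBuckets", "sourceIPAddress": "203.0.113.10",
     "userIdentity": {"accessKeyId": "AKIA...LEAKED"}},
    {"eventName": "RunInstances", "sourceIPAddress": "198.51.100.77",
     "userIdentity": {"accessKeyId": "AKIA...LEAKED"}},
]
print([e["eventName"] for e in suspicious_events(events, "AKIA...LEAKED")])
# → ['RunInstances']
```

Note that CloudTrail sometimes records a service hostname rather than an IP in sourceIPAddress; a production version would handle that case before parsing.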

If you find unauthorized activity: this is now a full security incident, not just a credential rotation. Engage your incident response team, preserve logs, and consider whether you have notification obligations (GDPR, state breach laws, customer contracts).


Minute 30-45: Request Deletion from the AI Provider

Contact the AI provider to request deletion of the prompt containing credentials.

Provider-specific guidance:

Anthropic (Claude): API prompts are retained for 30 days for abuse detection, then deleted. Enterprise customers can request zero retention. File a deletion request through your account team or support. If using Claude.ai (consumer), data may be used for training unless you opt out.

OpenAI: API data is retained for 30 days (not used for training by default). ChatGPT data is used for training unless opted out. Enterprise and Team tiers have zero retention. Request deletion through OpenAI's privacy contact or your account manager.

Google (Gemini): Retention varies by product and tier. Gemini API in Vertex AI has enterprise data governance. Consumer Gemini has different terms. Check your specific agreement.

What to include in your deletion request:

  • Timestamp of the request (exact, with timezone)
  • API key or account that made the request (to help them locate it)
  • Description of the sensitive content (without repeating the actual secrets)
  • Reference to their data processing agreement if you have one

Be realistic about what this accomplishes. Deletion removes the data from the provider's active systems. It doesn't guarantee removal from all backups, and it doesn't un-train any model that may have already processed the data. This is damage limitation, not damage reversal.


Minute 45-60: Document and Prevent

Document the incident:

  • What was leaked and when
  • Which AI tool and account were used
  • Time to detection (how did you find out?)
  • Time to rotation
  • Whether any unauthorized use was detected
  • Root cause (why did the developer have these credentials in a file they were debugging?)

That last question matters most. The developer didn't maliciously leak credentials. They were debugging, and the fastest path was to paste context into an AI. This will happen again tomorrow unless you change the environment.

Prevention measures, in order of effectiveness:

  1. Network-layer interception. A MITM proxy that scans every outbound AI request for secrets — and blocks or redacts them before they reach the provider. This is the only approach that works across all AI tools: chat UIs, CLI tools, SDKs, scripts, and agent frameworks.

  2. Credential management. If developers have long-lived credentials in .env files, they will eventually leak them. Move to short-lived credentials (AWS STS, OIDC federation), secrets managers (Vault, AWS Secrets Manager), and just-in-time access.

  3. AI tool governance. Know which AI tools your team uses. 81% of employees use unapproved AI tools (UpGuard, 2025). If you don't have visibility, you can't protect what you can't see.

  4. Developer education. Teach your team about the risk — not by banning AI tools, but by giving them safe ways to use them. A developer who knows their AI traffic is scanned for secrets is a developer who doesn't need to worry about accidentally leaking one.
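As a sketch of what the network-layer approach in (1) does, here is a minimal redaction pass over an outbound request body. The patterns are illustrative, not the product's actual rule set, and a real proxy would do this inline on the decrypted HTTPS stream rather than on a string.

```python
import re

# Illustrative (pattern, replacement) pairs -- a production proxy ships hundreds.
REDACTIONS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED:aws-access-key]"),
    (re.compile(r"\bsk_live_[0-9a-zA-Z]{24,}\b"), "[REDACTED:stripe-key]"),
    # Keep the connection string's shape, strip only the password.
    (re.compile(r"(postgres(?:ql)?://[^:\s]+:)[^@\s]+(@)"), r"\1[REDACTED]\2"),
]

def redact(body: str) -> tuple[str, int]:
    """Redact known secret patterns from an outbound request body.
    Returns the cleaned body and the number of substitutions made."""
    total = 0
    for pattern, replacement in REDACTIONS:
        body, n = pattern.subn(replacement, body)
        total += n
    return body, total

prompt = "Debug this: AWS_KEY=AKIAIOSFODNN7EXAMPLE postgresql://app:hunter2@db:5432/prod"
clean, count = redact(prompt)
print(count, clean)  # 2 substitutions; the prompt still reads naturally
```

Redaction (rather than blocking) keeps the developer unblocked: the AI still gets enough context to debug the deployment, and the secret never leaves the machine.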


The Uncomfortable Math

The average data breach costs $4.44 million. Shadow AI adds $670,000 to that figure (IBM Cost of Data Breach, 2025).

The average enterprise sees 223 policy violations per month involving sensitive data sent to AI apps — and that number doubled year-over-year (Netskope Cloud & Threat Report, 2025).

This isn't a one-time incident. It's a recurring exposure pattern. You can run this playbook 223 times a month, or you can put something at the network layer that catches secrets before they leave.


The Difference Between Detection and Prevention

If you're reading this playbook because a secret already leaked, you're in detection mode. Everything above is damage control.

Prevention means catching the secret before it reaches the AI provider. That requires inspection of the actual request body — not just the destination URL. A traditional firewall sees "employee sent a request to api.anthropic.com." A MITM proxy sees "employee sent a request to api.anthropic.com containing an AWS access key starting with AKIA, a PostgreSQL connection string with production credentials, and a Stripe live API key."

That's the difference between knowing your developer used Claude and knowing they sent your production database credentials to Claude.

CitrusGlaze runs a MITM proxy with 254+ secret detection patterns, written in Rust for wire-speed performance. It sits at the network layer and scans every AI API request — from chat UIs, CLI tools, SDKs, agent frameworks, and everything else that makes HTTPS calls to AI providers. Critical secrets can be blocked or redacted before they ever leave your device.

Install takes 5 minutes. Your prompts never leave your machine. And you never have to run this playbook again.


Every statistic in this post is cited with its source. First-party data is from CitrusGlaze telemetry (26,565 intercepted AI requests). See our State of AI Traffic 2026 report for the full dataset.

Install CitrusGlaze and catch secret leaks before they reach the AI provider.
