
$430M in One Week: 6 AI Security Startups Launched — None of Them Can See Your Prompts

· Pierre
ai-security startups funding governance-theater prompt-visibility agent-security

Between March 3 and March 12, 2026, six cybersecurity startups emerged from stealth and collectively raised over $430 million. All of them claim to secure AI. Most of them can't tell you what's inside an AI prompt.

I've been tracking this space obsessively while building CitrusGlaze, so I went through every launch announcement, product page, and architecture diagram. Here's what I found.

The March 2026 Launch Wave

| Company | Funding | Launch Date | What They Actually Do |
|---|---|---|---|
| Armadin | $189.9M | Mar 10 | Autonomous AI red-teaming agents |
| Kai | $125M | Mar 11 | Agentic AI for threat detection and response |
| Onyx Security | $40M | Mar 12 | AI agent governance and control plane |
| Fig Security | $38M | Mar 3 | Security change management |
| JetStream Security | $34M | Mar 9 | AI asset mapping and governance graphs |
| Geordie AI | $6.5M + $5M RSAC prize | Mar 23 (presenting) | AI agent discovery and behavior monitoring |

Total: $438.4M in combined funding.

Sources: TechCrunch — Armadin, SecurityWeek — Kai, TechStartups — Onyx, TechCrunch — Fig Security, Security Boulevard — Geordie AI

Every single one has a credible founding team. Kevin Mandia (Mandiant, sold to Google for $5.4B) at Armadin. Galina Antova (Claroty, $3B OT security company) at Kai. Unit 8200 and Nvidia alumni at Onyx. Ex-Snyk CTO at Geordie.

The money is real. The talent is real. And the market timing is right — 88% of organizations reported AI agent security incidents last year, per Gravitee's 2026 report.

So what's the problem?

Most of These Companies Can't See Your Data

Here's the uncomfortable truth about the March launch wave: the majority of these startups operate at the governance layer, not the data layer.

Let me be specific.

Armadin builds autonomous red-teaming agents. It simulates attacks on your systems. Valuable? Absolutely. Does it know that your developer just pasted an AWS secret key into Claude Code? No. It's offensive security, not traffic inspection.

Kai uses agentic AI for threat detection and incident response across IT and OT environments. It's defending your infrastructure from attackers. It's not watching what your own AI tools send to OpenAI and Anthropic. Different problem entirely.

Fig Security helps security teams manage infrastructure changes. It's not AI security at all — it's security operations tooling that launched during the AI security hype cycle.

JetStream Security maps relationships between agents, models, data, tools, and identities using "AI Blueprints." It's a governance graph. It can tell you which agents exist and what they're connected to. It cannot tell you what's inside the request an agent just sent to Claude, or whether that request contained your production database credentials.

Onyx Security comes closest to real traffic visibility. Their Onyx Guardian Agent monitors AI agent behavior and can block unsafe actions. But it operates as a control plane that requires integration with each AI system — it's not network-level interception. If an agent uses a tool or provider Onyx doesn't integrate with, it's blind.

Geordie AI discovers and monitors AI agents across your enterprise. It's valuable for inventory ("how many agents do we have?") but it's behavioral monitoring, not content inspection.

The Governance Layer vs. the Data Layer

There's a pattern here. Almost all of the money flowing into AI security right now is going to the governance layer: asset discovery, policy definition, relationship mapping, behavior monitoring.

None of it is going to the data layer: what is actually inside the AI requests your organization sends, right now, today?

This matters because the actual threats are at the data layer: secrets pasted into prompts, credentials buried in error logs, sensitive records flowing out inside request bodies.

You can have the most beautiful governance graph in the world, mapping every agent to every model to every identity. But if you can't read what's inside the HTTP request body, you don't know if that agent just exfiltrated your customer database.

Governance without visibility is a compliance checkbox. It's not security.
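Reading the request body is not exotic. Here's a minimal sketch of what data-layer inspection means in practice — the pattern names and regexes below are my own illustration, not any vendor's detection engine, and real scanners use far more patterns plus entropy checks:

```python
import re

# Illustrative patterns for two of the leak types discussed above.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "postgres_url": re.compile(r"\bpostgres(?:ql)?://\S+:\S+@\S+"),
}

def find_secrets(body: str) -> list[str]:
    """Return the names of secret types found in an outbound request body."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(body)]

prompt = "Debug this: psql failed with postgresql://app:s3cret@db.prod:5432/main"
print(find_secrets(prompt))  # -> ['postgres_url']
```

A governance graph never runs a check like this, because it never sees `body` in the first place.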

Why Governance Gets Funded and Visibility Doesn't

Venture capital loves governance-layer companies for three reasons:

1. They're easier to sell. "We'll map all your AI assets and give you a dashboard" is a conversation every CISO can have. "We'll MITM-proxy all your AI traffic" makes procurement teams nervous, even though it's the only approach that actually works for CLI tools, API calls, and agent frameworks.

2. They don't touch data. Governance tools can be SOC 2 compliant on day one because they never see the actual prompts. This is great for the vendor's sales cycle and terrible for the customer's actual security.

3. They scale to enterprise pricing. Asset discovery and governance graphs are features you can charge $50K/year for. Traffic inspection at the device level is something you can do with an open-source proxy for free.

This creates a perverse incentive: the products that are easiest to sell and fund are the ones that provide the least actual protection against the threats that matter.

What Actually Stops a Secret From Leaking?

Let me walk through a real scenario.

A developer on your team opens Claude Code and asks it to debug a failing test. Claude Code sends a request to Anthropic's API containing the test file, the error log, and — because the error log includes a database connection string — the credentials to your production PostgreSQL database.

What stops this?

| Approach | Does It Stop the Leak? | Why / Why Not |
|---|---|---|
| Governance graph (JetStream, Geordie) | NO | Knows the agent exists, doesn't see request content |
| AI red teaming (Armadin) | NO | Tests for attack resilience, doesn't monitor live traffic |
| Threat detection (Kai) | NO | Watching for external attackers, not insider data flow |
| Agent control plane (Onyx) | MAYBE | Only if integrated with Claude Code specifically |
| Browser extension (Harmonic, Nightfall) | NO | Claude Code is a CLI tool, not a browser |
| Cloud proxy (Netskope, Zscaler) | YES | But routes your prompt through their cloud to do it |
| Local MITM proxy (CitrusGlaze) | YES | Intercepts the request locally, scans for secrets, blocks or redacts before it leaves the device |

The only approaches that actually stop the credential from reaching Anthropic are the ones that inspect the HTTP request body. And of those, only a local proxy does it without sending your prompt to another cloud.
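The block-or-redact step in that last row can be sketched in a few lines. This is illustrative policy logic, not CitrusGlaze's actual implementation; the `messages`/`content` field names mimic a typical chat-completion request body, and the model name is a placeholder:

```python
import json
import re

CONN_STRING = re.compile(r"\bpostgres(?:ql)?://\S+:\S+@\S+")

def scrub_request(raw_body: bytes) -> tuple[bytes, bool]:
    """Redact connection strings from an outbound chat request body.
    Returns (body_to_forward, was_modified)."""
    payload = json.loads(raw_body)
    modified = False
    for msg in payload.get("messages", []):
        text = msg.get("content", "")
        if isinstance(text, str) and CONN_STRING.search(text):
            msg["content"] = CONN_STRING.sub("[REDACTED:connection-string]", text)
            modified = True
    return json.dumps(payload).encode(), modified

body = json.dumps({"model": "some-model", "messages": [
    {"role": "user", "content": "Test fails: postgresql://app:pw@prod-db/main"}
]}).encode()
clean, changed = scrub_request(body)
print(changed)  # True
```

Running locally, this decision happens before the bytes leave the device; a cloud proxy makes the same decision only after your prompt has already traveled to its infrastructure.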

The Visibility Stack You Actually Need

If I were a CISO evaluating AI security tooling in March 2026, here's what I'd actually want:

Layer 1: Traffic inspection. Something that sits at the network level and reads every AI request body. Detects secrets, classifies sensitive data, blocks critical leakage. Must work with CLI tools, API calls, browser-based tools, and agent frameworks. Must run locally — your prompts are the sensitive data, sending them to another cloud for "protection" is the threat model eating its own tail.

Layer 2: Cost and usage visibility. Token counts, provider distribution, model usage, per-team attribution. You can't negotiate volume discounts or set budgets without this.

Layer 3: Agent governance. Discovery, identity management, permission scoping, behavioral monitoring. This is where the $430M companies live. It's valuable — but only after you have Layer 1 and Layer 2.

The March launch wave funded Layer 3 heavily. Layer 1 and Layer 2 are still underserved.
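Layer 2 is mostly bookkeeping once Layer 1 gives you the traffic. A toy sketch of per-team attribution — the record fields are invented for illustration (in practice they'd be parsed from proxy logs and provider usage responses):

```python
from collections import defaultdict

# Hypothetical per-request usage records, e.g. parsed from proxy logs.
records = [
    {"team": "platform", "provider": "anthropic", "input_tokens": 12_000, "output_tokens": 3_000},
    {"team": "platform", "provider": "openai", "input_tokens": 5_000, "output_tokens": 1_000},
    {"team": "data", "provider": "anthropic", "input_tokens": 40_000, "output_tokens": 8_000},
]

def tokens_by_team(rows):
    """Aggregate total token usage per team for budget attribution."""
    totals = defaultdict(int)
    for r in rows:
        totals[r["team"]] += r["input_tokens"] + r["output_tokens"]
    return dict(totals)

print(tokens_by_team(records))  # {'platform': 21000, 'data': 48000}
```

Trivial code, but impossible without the request-level visibility that Layer 1 provides.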

The Numbers That Actually Matter

Here's how to think about this if you're allocating security budget:

  • $430M — what VCs just put into governance-layer AI security startups in one week
  • $670K — the extra cost of a shadow AI data breach vs. a standard breach (IBM, 2025)
  • 88% — organizations that had an AI agent security incident last year (Gravitee, 2026)
  • 47% — deployed AI agents running with zero monitoring (Gravitee, 2026)
  • 5 minutes — time to deploy CitrusGlaze and start seeing every AI request on your network

The funding gap is your opportunity gap. While everyone else is building governance dashboards, you can deploy actual traffic visibility today, for free, without routing your data through anyone else's cloud.

What I'd Tell the VCs

If I had five minutes with the partners who wrote those $430M in checks, I'd say this:

Governance is necessary. I'm glad it's getting built. But if you look at the Gravitee data — 47% of agents running dark, 45.6% authenticated with shared API keys, 88% incident rate — the problem isn't that organizations lack a map of their AI assets. The problem is that they can't see what those assets are actually doing.

The governance layer tells you what exists. The data layer tells you what's happening. And right now, almost all the money is going to the first one.

The next wave of AI security will be built at the wire — inspecting actual traffic, at the device level, without introducing another cloud dependency. That's what we're building at CitrusGlaze.


CitrusGlaze is an open-source AI traffic proxy that provides security scanning, cost tracking, and observability for every AI API call — locally, in 5 minutes, without routing your data through the cloud. Try it free.

#AISecurity #StartupFunding #AgentSecurity #CISO #AIGovernance
