The DailyVibe
Ad TechAITechnologyMixed RealityGuides

Stay ahead of the curve

Ad tech, AI, and emerging technology — delivered daily.

The DailyVibe

Your daily dose of ad tech, AI, technology, and mixed reality.

Sections

  • Ad Tech
  • AI
  • Technology
  • Mixed Reality
  • Guides

More

  • Search
  • Editorial Standards
  • Corrections
  • Transparency
  • Privacy Policy
  • Terms of Service

About

Powered by AI-assisted journalism with human editorial oversight.

© 2026 The Daily Vibe. All rights reserved.Powered by AI-assisted journalism
Novee's AI red team agent attacks your LLM apps so hackers don't have to
Home/AI/Novee's AI red team agent attacks your LLM apps so...
AIMarch 25, 2026· 5 min read

Novee's AI red team agent attacks your LLM apps so hackers don't have to

By Marcus WebbAI-GeneratedAnalysisAuto-published4 sources cited

Novee just launched an autonomous AI agent that pen tests your LLM-powered applications, and based on their Cursor vulnerability disclosure alone, these people know what they're doing.

The company introduced AI Red Teaming for LLM Applications at RSAC 2026 in San Francisco this week. It's currently in beta, with live demos running at booth S-0262. The pitch: point an AI agent at your chatbot, copilot, or autonomous agent workflow, and it will chain together multi-step attacks to find vulnerabilities that static scanners and one-shot prompt testing miss.

What it actually does

Novee's agent doesn't just fire payloads at your app and check for errors. Before running any tests, it gathers context on the target: reads documentation, queries APIs, and builds an internal model of how the application works. Then it tailors its attack strategies to that specific environment.

Gon Chalamish, co-founder and CPO at Novee, described an example where the agent maps an application's role-based access control structure, then probes whether a lower-privileged user can access data restricted to a higher-privileged one. That's not a canned test. That's reconnaissance followed by targeted exploitation, which is how actual attackers operate.

The agent tests for prompt injection, indirect prompt injection, jailbreak attempts, data exfiltration, tool abuse, and agent manipulation. It works with apps built on OpenAI, Anthropic, or open-source models. And it plugs into CI/CD pipelines, so you can run security tests as part of your standard deployment process.

Why traditional pen testing falls short here

Here's the core problem Novee is solving: most enterprise security teams test each application once a year, maybe less. According to Novee, a security team managing 500 applications simply cannot keep pace with manual testing. Meanwhile, LLM applications change continuously. Model updates alter behavior even when no code is deployed. Your annual pen test is stale before the report is finished.

Human pen testers face two constraints. First, they're expensive and scarce. Second, most of them specialize in web and infrastructure testing. Prompt injection and indirect prompt injection aren't part of the standard pen tester toolkit. The attack surface is fundamentally different.

"Attackers are already adapting their techniques for AI systems," Chalamish said. "Security teams need a way to test those systems the same way adversaries attack them."

Ido Geffen, CEO and co-founder, put it more bluntly: "The window between vulnerability and exploitation can shrink to minutes. Defending against that requires continuous testing, not periodic assessments."

Share:

Report an issue with this article

llmai-securityRSAC 2026noveered-teamingdevopspen-testing

Related Articles

ARC-AGI-3 Dropped to Near-Zero. That's the Point.
AI

ARC-AGI-3 Dropped to Near-Zero. That's the Point.

Google's TurboQuant Compresses KV Cache 6x with No Accuracy Loss
AI

Google's TurboQuant Compresses KV Cache 6x with No Accuracy Loss

The preemption clause that could kill 50 state AI laws
AI

The preemption clause that could kill 50 state AI laws

The credibility check

What separates Novee from vaporware is their research output. Their team recently disclosed a vulnerability in Cursor, the popular coding assistant, that allowed attackers to manipulate the context window and achieve full remote code execution on a developer's machine. They have additional findings under responsible disclosure with other vendors.

Findings from that active research feed directly into the agent's training. That's a meaningful differentiator: the agent learns from real-world vulnerability discoveries, not just theoretical attack taxonomies.

The founding team (Geffen, Chalamish, and Omer Ninburg) all come from national-level offensive security operations. They raised $51.5 million within four months of founding the company, with YL Ventures, Canaan Partners, and Zeev Ventures leading.

How it stacks up

The closest comparison is Promptfoo, the open-source LLM red teaming and evaluation framework that reportedly counts 127 Fortune 500 companies among its users. Promptfoo offers red teaming, vulnerability scanning, and CI/CD integration for LLM apps. It's free to start, highly configurable, and has strong community traction.

The difference is approach. Promptfoo runs predefined test suites and adversarial scenarios you configure. Novee's agent operates autonomously: it does its own recon, reasons about the application, and chains attacks together adaptively. Think of Promptfoo as a structured testing harness and Novee as a simulated attacker with its own judgment. Both have a place in a security workflow, and honestly, mature teams will probably use both.

Microsoft's PyRIT is another option for teams that want a free toolkit for adversarial testing, though it's more research-oriented than production-ready.

Pricing and availability

Novee hasn't published pricing yet. The product is in beta, and the company is taking demo requests at novee.security/demo. Chalamish noted that AI pen testing doesn't require a new budget category since security teams already spend on pen testing, red teaming, and vulnerability scanning. The pitch is reallocating from periodic manual engagements to continuous automated testing.

No public rate limits or usage tiers have been disclosed. For a product targeting enterprise security teams with hundreds of applications, I'd expect this to land in the five-to-six-figure annual contract range, but that's speculation until they publish numbers.

The verdict

Wait (but watch closely). Novee's approach is smart, the team has real offensive security credentials, and the Cursor vulnerability disclosure proves they find real bugs. But this is a beta product with no public pricing. If you're running LLM applications in production today, start with Promptfoo (free, open-source, available now) to get baseline coverage. Keep Novee on your shortlist for when they exit beta and publish pricing. If their agent delivers on the autonomous multi-step attack promise at scale, it could be worth serious money for teams managing large portfolios of AI-powered apps.

Marcus Webb covers AI products for The Daily Vibe.

This article was AI-generated. Learn more about our editorial standards