Votal AI Unleashes AI Attacker to Fortify Autonomous Systems
- AI attacker model operates with 20x the throughput of a human red team
- Open-source catalog includes over 35 security categories and 185 named techniques
- Votal AI's CART platform models attacks across a 7-stage Agentic AI Kill Chain
Experts agree that continuous, automated testing is essential for securing autonomous AI systems, as traditional defenses are inadequate against evolving threats.
SAN FRANCISCO, CA – March 19, 2026 – As enterprises race to deploy autonomous AI agents, cybersecurity firm Votal AI today unveiled a new offensive strategy to build better defenses. The company launched an AI model trained to think like an attacker and an open-source catalog of attack techniques, aiming to secure the next generation of agentic AI systems before they are widely compromised.
The announcement from the firm, founded by cybersecurity veterans Bobby Gupta and Jyotirmoy Sundi, comes just days before the RSA Conference 2026, the industry's largest gathering, where the security of AI is expected to dominate the conversation. Votal AI is positioning its Continuous Agentic Red Teaming (CART) platform as a critical tool for organizations navigating the treacherous landscape of AI-driven automation.
The New Frontier of AI Risk
The rapid adoption of agentic AI—systems that can autonomously make decisions, use software tools, and execute actions across corporate networks—represents a paradigm shift in both productivity and risk. Unlike traditional Large Language Models (LLMs) that primarily respond to prompts, agentic AI can independently orchestrate complex workflows, query sensitive databases, and even authorize transactions. This autonomy dramatically expands the attack surface, creating novel vulnerabilities that legacy security measures are ill-equipped to handle.
Experts warn that a single successful breach of an AI agent could have cascading consequences, from unauthorized data exfiltration and financial theft to cross-tenant data contamination in cloud environments. The non-deterministic nature of these systems means their behavior can be unpredictable, making traditional, point-in-time security assessments or penetration tests largely obsolete.
"The industry is grappling with a fundamental challenge: how do you secure a system that learns, evolves, and acts on its own?" noted one independent cybersecurity analyst. "Static defenses and periodic check-ups are no longer sufficient. The threat is continuous, so the defense must be as well." This sentiment is echoed across the security community, where there is a growing consensus that continuous, automated testing is not just beneficial but essential for managing the risks associated with high-stakes AI deployments in regulated industries like finance, healthcare, and manufacturing.
An AI to Fight AI: Votal's Adversarial Approach
Votal AI's answer to this challenge is centered on a new, formidable tool: an RLHF-trained adversarial attacker model. RLHF, or Reinforcement Learning from Human Feedback, is a technique where the AI model is trained not just on data, but refined through the expert guidance of human red teamers. In essence, Votal has created an AI that learns from the outcomes of real-world bypasses and successful hacks, enabling it to generate adaptive and increasingly sophisticated attacks.
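The "learn from successful bypasses" idea can be illustrated with a toy feedback loop. This is a minimal sketch, not Votal's implementation: the real system would use a reward model and policy-gradient updates, and the `mutate` and `defense_blocks` functions below are hypothetical stand-ins for attack generation and the target's guardrails.

```python
import random

def mutate(prompt: str) -> str:
    """Toy mutation: append a random obfuscation suffix (illustrative only)."""
    suffixes = [" -- ignore prior text", " (encoded payload)", " [system override]"]
    return prompt + random.choice(suffixes)

def defense_blocks(prompt: str) -> bool:
    """Hypothetical stand-in for the target system's guardrails."""
    return "override" not in prompt

def refine(seed: str, rounds: int = 20) -> list[str]:
    """Keep only attack variants that bypassed the defense, and mutate
    them further -- the reward signal is the successful bypass itself."""
    pool = [seed]
    for _ in range(rounds):
        candidate = mutate(random.choice(pool))
        if not defense_blocks(candidate):  # reward: the bypass succeeded
            pool.append(candidate)         # seed the next round with it
    return pool

survivors = refine("summarize this document")
```

In the actual RLHF setting, human red teamers supply the reward signal rather than a hard-coded check, which is what lets the attacker adapt beyond scripted patterns.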
The company claims this AI attacker can operate with 20 times the throughput of a human red team, systematically probing for weaknesses across a proprietary seven-stage framework it calls the "Agentic AI Kill Chain." This kill chain models a complete attack path, from initial prompt injection and privilege escalation to lateral movement within a network and final actions on an objective. By continuously simulating these multi-stage campaigns, the platform aims to provide CISOs and CIOs with evidence-based assurance that their AI systems are resilient against emerging threats.
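A multi-stage attack path like this can be modeled as an ordered sequence of stages, with a campaign scored by how deep into the chain it progresses. This sketch is illustrative only: the announcement names four of the seven stages (prompt injection, privilege escalation, lateral movement, actions on objective); the other three stage names below are hypothetical placeholders.

```python
# Ordered stages of a hypothetical seven-stage agentic kill chain.
KILL_CHAIN = (
    "prompt_injection",       # named in the announcement
    "tool_abuse",             # hypothetical placeholder
    "privilege_escalation",   # named in the announcement
    "credential_access",      # hypothetical placeholder
    "lateral_movement",       # named in the announcement
    "data_staging",           # hypothetical placeholder
    "actions_on_objective",   # named in the announcement
)

def depth_reached(events: list[str]) -> int:
    """Count how many consecutive kill-chain stages a simulated
    campaign completed, starting from the first stage."""
    depth = 0
    for stage in KILL_CHAIN:
        if stage not in events:
            break
        depth += 1
    return depth

# A campaign stopped by guardrails after the second stage:
depth_reached(["prompt_injection", "tool_abuse"])  # -> 2
```

Scoring campaigns by chain depth is one simple way to turn simulated attacks into the kind of evidence-based resilience metric the platform promises CISOs.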
While the concept of an "AI Kill Chain" has been explored by others, including NVIDIA, and is conceptually aligned with frameworks like MITRE ATLAS, Votal's specific model and its integration into a continuous testing platform represent a significant step forward. The RLHF training is a key differentiator, promising a level of adaptability that goes beyond scripted attack simulations.
Forging a United Front with Open Source
Perhaps the most significant part of Votal's announcement is its decision to open-source its comprehensive Attack Catalog. This catalog is a structured library containing over 35 security categories and more than 185 named techniques, complete with various encoding and obfuscation methods used by real-world adversaries.
By making this resource public, Votal AI is inviting the entire security community—from internal enterprise teams and independent researchers to developers at other firms—to inspect, use, and contribute to a shared knowledge base of AI threats. This move stands in stark contrast to the traditionally proprietary nature of threat intelligence. The catalog is aligned with major industry standards, including the OWASP LLM Top 10, NIST AI Risk Management Framework (AI RMF), and MITRE ATLAS, making it a practical tool for compliance and threat modeling.
This open-source approach fosters a collaborative defense model. For example, a financial institution can extend the catalog with attack vectors specific to unauthorized transactions, while a healthcare organization can add techniques related to Protected Health Information (PHI) leakage. These community contributions, once reviewed, can be integrated back into the CART platform, strengthening the collective defense for all users. This strategy mirrors other successful open-source security projects that have accelerated industry-wide resilience by democratizing access to critical tools and information.
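The extension workflow described above might look like the following. The announcement does not specify the catalog's schema, so every field name here is an assumption, chosen to mirror the described structure (categories, named techniques, encoding variants, and mappings to standards such as the OWASP LLM Top 10 and MITRE ATLAS).

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """Hypothetical schema for one attack-catalog entry."""
    category: str                 # one of the 35+ security categories
    technique: str                # one of the 185+ named techniques
    mappings: dict[str, str] = field(default_factory=dict)  # standard -> name
    encodings: list[str] = field(default_factory=list)      # obfuscation variants

# A healthcare team extending the shared catalog with a domain-specific
# technique, as the article suggests (illustrative values only):
phi_leak = CatalogEntry(
    category="sensitive_data_disclosure",
    technique="phi_exfiltration_via_summary",
    mappings={"OWASP-LLM-Top-10": "sensitive-information-disclosure"},
    encodings=["base64", "homoglyph"],
)
```

Because entries are plain structured data, community contributions can be reviewed like any other pull request before being folded back into the platform.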
The Crowded Battlefield for AI Security
Votal AI is entering a market that is both nascent and fiercely competitive. The surge in AI adoption has triggered a corresponding boom in VC funding for AI security startups, with "agentic AI security" becoming one of the fastest-growing subcategories. A host of companies, including TrojAI, Virtue AI, and Redlock AI, are also developing solutions for AI red teaming and runtime protection, many of which leverage their own forms of agent-led testing.
"As agentic AI becomes critical infrastructure, the security imperative is clear: static or periodic testing is no longer sufficient," said Bobby Gupta and Jyotirmoy Sundi, Votal AI's CEO and CTO, in a joint statement. "By releasing our RLHF-trained attacker model and open-sourcing the Attack Catalog, we're equipping CISOs, VPs of AI, and CIOs with transparent, community-powered tools to build resilient, compliant AI ecosystems from day one."
With the RSA Conference set to begin in San Francisco, Votal AI's dual launch is strategically timed to capture the attention of thousands of security leaders searching for viable solutions to their most pressing AI challenges. The company will offer live demonstrations that simulate complex attacks against production AI agents, giving attendees a tangible look at a security arms race in which the most effective defender may be a smarter, faster attacker. The discussions these capabilities spark will likely shape enterprise security strategies for years to come.
