OWASP Expands AI Security Playbook for Autonomous Agentic Systems
- 8 new sponsors added to the OWASP GenAI Security Project, reflecting growing industry support.
- 5 acquisitions of former project contributors by major cybersecurity firms, validating the project's influence.
- New OWASP Top 10 for Agentic Applications 2026 identifies critical vulnerabilities like Agent Behavior Hijacking and Tool Misuse.
Experts agree that OWASP's expanded AI security frameworks provide critical, actionable guidance for securing autonomous agentic systems, addressing unique risks that traditional standards cannot yet cover.
OWASP Expands AI Security Playbook for Autonomous Agentic Systems
WILMINGTON, Del. – March 19, 2026 – The Open Worldwide Application Security Project (OWASP) GenAI Security Project today announced a significant expansion of its AI security frameworks, releasing a suite of new guides designed to help organizations navigate the complex risks of generative and autonomous AI systems. The announcement, which comes just ahead of the RSA Conference 2026, is bolstered by growing industry support, including eight new sponsors and a series of high-profile acquisitions involving former project contributors, underscoring the project's foundational role in the rapidly maturing AI security market.
As enterprises move beyond simple chatbots to deploying sophisticated AI agents that can act autonomously, the open-source community is racing to provide the necessary guardrails. The new resources from the OWASP GenAI Security Project represent one of the most comprehensive efforts to date to standardize security practices for this new technological frontier.
Building the Playbook for AI Security
At the heart of the release are the Q2 2026 Updated Landscape Guides for LLM and Agentic Security. These documents expand on the project's widely referenced AI Security Solutions Landscape, which maps the entire lifecycle of a generative AI application—from development and testing to deployment and governance. The updated guide introduces a new agentic red teaming taxonomy, providing a structured framework for organizations to identify, measure, and mitigate AI risks through adversarial testing and continuous feedback loops.
Complementing this is the new GenAI Data Security Risks and Mitigations guide for 2026. This resource provides foundational guidance with a sharp focus on the data layer, which is critical for securing AI systems. It addresses risks across the entire data pipeline, from training datasets and fine-tuning inputs to user prompts and model outputs, offering practical strategies for mitigation.
These guides join a growing library of openly licensed resources that are seeing rapid industry adoption. Other notable tools and frameworks include:
OWASP SBOM/AIBOM Generator: An open-source tool that enhances AI supply chain transparency by generating AI Bills of Materials (AIBOMs). By creating a detailed inventory of an AI model's components, data sources, and dependencies in a standard format like CycloneDX, this tool helps organizations manage risk, ensure compliance, and respond to incidents.
Guide for Secure MCP Server Development: This guide provides actionable guidance for securing Model Context Protocol (MCP) servers, the critical—and often vulnerable—connection points that allow AI assistants to interact with external tools, APIs, and data. The guide addresses unique threats like Tool Poisoning and the "Confused Deputy" problem, where an agent with legitimate permissions is tricked into performing malicious actions.
Agentic AI: Securing the New Autonomous Frontier
The most significant development is arguably the new OWASP Top 10 for Agentic Applications for 2026. While the original Top 10 for Large Language Models (LLMs) focused on risks like prompt injection and insecure output handling, this new framework addresses the unique and heightened risks posed by AI systems that can make decisions and take actions on their own. Industry experts have noted that most agentic AI failures stem from control issues and uncontrolled access to data, rather than flaws in the underlying models themselves.
The new Top 10 for agentic systems identifies critical vulnerabilities such as Agent Behavior Hijacking, Tool Misuse and Exploitation, and Identity and Privilege Abuse. These risks move beyond simple data leakage to scenarios where a compromised AI agent could execute unauthorized financial transactions, delete critical data, or exploit other systems it is connected to. The framework provides a crucial starting point for developers and security teams building and deploying these advanced systems.
Steve Wilson, Chief AI Officer at Exabeam and a co-chair and co-founder of the project, highlighted the urgency of this work. "Since the 2023 launch of the OWASP Top 10 for Large Language Models, we've witnessed rapid acceleration in AI technology, from chatbots to agents to fully autonomous digital workers," he said. "Our ability to move faster than traditional standards bodies enables us to deliver timely, practical guidance that helps organizations deploy these technologies securely and responsibly."
Market Validation: Sponsorships and Strategic Acquisitions
The growing importance of OWASP's work is reflected in its expanding industry support. The project welcomed eight new sponsors: Apiiro, Capsule, F5, Fujitsu, NeuralTrust, Starseer, Straiker, and Tellus Digital. This diverse group, ranging from application security platform providers like F5 and Apiiro to global technology giants like Fujitsu, signals a broad industry consensus on the need for standardized AI security practices.
Even more telling is the trend of acquisitions involving former project sponsors. In a powerful validation of the project's influence, several alumni have been acquired by cybersecurity's biggest players: SPLX by Zscaler, Pangea by CrowdStrike, Calypso AI by F5, Lakera by Check Point, and Prompt Security by SentinelOne. These acquisitions demonstrate that the frameworks and expertise cultivated within the OWASP community have become highly valuable strategic assets, shaping the commercial AI security market and influencing the M&A landscape.
From Open-Source Code to Industry Standard
While formal standards bodies like NIST with its AI Risk Management Framework (AI RMF) and ISO with its ISO/IEC 42001 standard provide essential high-level governance structures, the OWASP GenAI Security Project fills a different but equally critical role. Its community-driven, open-source model allows it to operate with agility, producing practical, technical, and actionable guidance that can keep pace with the rapid evolution of AI threats.
Organizations often use the high-level principles from NIST and ISO to define what they need to do for AI governance, and then turn to OWASP's detailed guides and Top 10 lists to determine how to implement the technical security controls. This complementary relationship has positioned the project as a de facto source for on-the-ground implementation guidance.
"AI and agentic systems are no longer emerging technology. They are production reality, and the security community is still racing to catch up," said Scott Clinton, co-chair and co-founder of the project. "The resources we're releasing ahead of RSA represent our most comprehensive view yet of what organizations need to build and deploy AI safely. We look forward to bringing those conversations to San Francisco."
To that end, the project will have a formidable presence at the upcoming RSA Conference 2026 in San Francisco, hosting a kickoff party, a dedicated summit for practitioners and CISOs, a hands-on agentic security workshop and hackathon, and a networking party. These events are designed to foster the collaboration and knowledge-sharing needed to secure the next generation of artificial intelligence.
