Permiso Unveils SandyClaw to Secure AI's Risky New Supply Chain
- Over 150 malicious skills and hundreds of vulnerabilities identified across community registries
- 97% of enterprise leaders expect a material AI-agent-driven security incident within the next year
- SandyClaw is the first dynamic sandbox designed to analyze and neutralize threats within AI agent skills
Experts agree that the rapid growth of AI agent skills marketplaces has outpaced security measures, making runtime behavior analysis essential to mitigate emerging threats.
PALO ALTO, CA – April 02, 2026 – As artificial intelligence agents become increasingly autonomous, the software supply chain that powers them has emerged as a new and fertile ground for cyberattacks. Addressing this critical vulnerability, identity security firm Permiso Security today announced the launch of SandyClaw, a platform it bills as the first dynamic sandbox designed specifically to analyze and neutralize threats within AI agent skills.
AI agents rely on downloadable “skills” to perform useful tasks, learning how to interact with new tools, APIs, and services. These skills, shared through rapidly growing marketplaces, function as the building blocks for agentic AI. However, this open ecosystem also creates a significant security blind spot. Permiso’s new platform executes these skills in a secure, isolated environment to observe their true behavior, moving beyond traditional security methods that attackers can easily evade.
The New AI Supply Chain and Its Hidden Dangers
The promise of agentic AI—systems that can independently reason and act to accomplish complex goals—has spurred a Cambrian explosion of development. Frameworks like OpenClaw, which gained over 145,000 GitHub stars in weeks, highlight the immense demand for autonomous assistants. This ecosystem is powered by skills, modular capabilities often defined in simple markdown files like SKILL.md, which instruct an agent on how to perform a workflow. Marketplaces such as SkillsMP and agentskill.sh now aggregate hundreds of thousands of these skills, creating a de facto software supply chain for AI.
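To make the skill format concrete: a SKILL.md file typically pairs a small metadata header with natural-language instructions the agent follows. The fragment below is a hypothetical illustration; the exact schema and field names vary by framework.

```markdown
---
name: deploy-status
description: Check the status of a deployment pipeline
---

# Deploy Status

When the user asks about a deployment, run:

    curl -s https://ci.example.com/api/status

Summarize the JSON response for the user in plain language.
```

Because the agent treats these instructions as trusted guidance, anything a skill author writes here, including commands to execute, becomes part of the agent's behavior.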
But this speed of innovation has outpaced security. Much like the early days of open-source software libraries, these skill marketplaces have become a target. Research has already uncovered significant threats, with one study identifying over 150 malicious skills and hundreds of vulnerabilities across community registries. The threat is not theoretical; campaigns like “ClawHavoc” have demonstrated how malicious extensions can be used for data exfiltration and prompt injection attacks.
The potential for damage is enormous. A malicious skill could be designed to covertly read local environment files to steal AWS credentials, crypto wallet keys, or other secrets. It could trick an agent into downloading and executing malware, poison the agent's memory to establish persistence across sessions, or inject hidden commands to exfiltrate sensitive corporate data. The risk is so pronounced that a recent report found 97% of enterprise leaders expect a material AI-agent-driven security incident within the next year.
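A minimal sketch of why environment variables are such a rich target: any code an agent executes inherits the parent process's environment, so a handful of lines is enough to enumerate secret-looking values. The name patterns below are illustrative, not exhaustive; a real malicious skill would exfiltrate the values, whereas this sketch only lists the names.

```python
import os
import re

# Variable-name patterns that commonly indicate credentials
# (an illustrative, non-exhaustive list).
SENSITIVE = re.compile(r"(SECRET|TOKEN|PASSWORD|API_KEY|AWS_|PRIVATE)", re.IGNORECASE)

def find_secret_env_vars(environ=os.environ):
    """Return the names of environment variables that look like credentials."""
    return sorted(name for name in environ if SENSITIVE.search(name))

if __name__ == "__main__":
    for name in find_secret_env_vars():
        print(name)
```

Nothing about this access is anomalous at the operating-system level, which is why observing what a skill actually touches at runtime matters more than scanning its source.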
Beyond Code Scans: Why Behavior is the New Battlefield
Until now, the primary methods for vetting AI skills have been static code analysis or asking another Large Language Model (LLM) for an opinion. Permiso argues these approaches are fundamentally flawed because they never execute the skill, meaning they cannot detect malicious behavior that only manifests at runtime.
“Most skill scanners inspect code or ask an LLM for an opinion. But real risk shows up at runtime: network activity, file writes, and access to sensitive environment variables,” said Ian Ahl, CTO at Permiso Security. “SandyClaw was built on the belief that behavior is more revealing than source code alone. We detonate the skill, capture everything it does, and let the evidence speak for itself.”
SandyClaw’s methodology, known as sandbox detonation, is a proven cybersecurity strategy now adapted for the AI era. It executes each skill in a fully instrumented, isolated environment, recording every action at both the LLM and operating system levels. This includes every network call, domain resolution, file write, and attempt to access environment variables. Crucially, the platform incorporates SSL interception, allowing it to decrypt encrypted outbound traffic to expose data exfiltration attempts that would otherwise be invisible.
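The general technique can be sketched with Python's built-in audit hooks, which report runtime events such as file opens and socket connections. This is a toy stand-in for a fully instrumented sandbox, not SandyClaw's implementation; a production system would run the skill in an isolated VM or container with OS-level tracing.

```python
import sys

events = []  # behavioral record: (event name, event arguments)

def monitor(event, args):
    # Record the runtime actions a skill-vetting sandbox cares about:
    # file access, process spawning, and outbound network connections.
    if event in ("open", "socket.connect", "subprocess.Popen", "os.system"):
        events.append((event, args))

# Note: audit hooks cannot be removed once installed.
sys.addaudithook(monitor)

# "Detonate" an untrusted snippet and observe what it actually does.
untrusted = "open('/tmp/dropped.txt', 'w').write('payload')"
exec(untrusted)  # in a real sandbox this runs in an isolated environment

for event, args in events:
    print(event, args)
```

The point of the exercise: the file write is captured as evidence regardless of how the source code was obfuscated, which is exactly what static analysis cannot guarantee.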
The collected behavioral data is then analyzed by a suite of powerful detection engines, including Sigma, Yara, Nova, and Snort, which are augmented with custom rules developed by Permiso's threat research team. This provides security teams with a clear, evidence-backed verdict rather than an ambiguous confidence score, allowing them to see precisely why a skill was flagged as malicious.
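Conceptually, this final step reduces to matching the captured event stream against declarative rules. The toy matcher below uses an invented rule format far simpler than Sigma, Yara, Nova, or Snort, purely to illustrate how a verdict can be traced back to concrete behavioral evidence rather than a confidence score.

```python
# Toy behavioral rules: a rule fires only if every listed indicator
# appears somewhere in the recorded event stream. Real detection
# engines use far richer matching than substring checks.
RULES = {
    "env-secret-read": ["getenv", "AWS_SECRET"],
    "outbound-exfil": ["socket.connect", "443"],
}

def verdict(event_log):
    """Return the names of rules whose indicators all appear in the log."""
    flat = "\n".join(event_log)
    return sorted(name for name, needles in RULES.items()
                  if all(n in flat for n in needles))

log = [
    "os.getenv('AWS_SECRET_ACCESS_KEY')",
    "socket.connect(('203.0.113.9', 443))",
]
print(verdict(log))  # → ['env-secret-read', 'outbound-exfil']
```

Because each fired rule names the exact events that triggered it, an analyst can audit the determination line by line instead of trusting an opaque score.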
Permiso's Strategic Leap from Identity to AI Agency
For Permiso Security, the launch of SandyClaw is a strategic expansion that leverages its deep expertise in identity security. The company, a two-time SC Award winner for its Threat Detection Technology, has built its reputation on its ability to unify and monitor the behavior of all identities—human, non-human, and now AI—across complex enterprise environments.
AI agents, which operate with credentials and permissions, represent a new and powerful class of non-human identity. Permiso's core platform already excels at correlating identity behavior across cloud providers and on-premise systems to surface high-fidelity threats. Extending this capability to the skills that define an AI agent's behavior is a natural and critical progression.
“Agents are only as trustworthy as the skills they run,” said Paul Nguyen, Co-Founder and Co-CEO of Permiso Security. “As skill marketplaces become the primary distribution channel for agent capabilities, the ability to validate what a skill actually does before it reaches your environment becomes a security requirement, not a nice-to-have. That is what SandyClaw delivers.”
This move positions the company at the forefront of securing not just who has access, but what automated systems do with that access. By treating AI agents as first-class identities, Permiso aims to provide the comprehensive visibility needed to manage their associated risks.
A New Standard for Trustworthy AI
With support for major agent frameworks including OpenClaw, Cursor, and Codex, SandyClaw aims to establish a new security baseline for the entire agentic AI ecosystem. The platform’s emphasis on full verdict transparency—providing the complete behavioral record behind every determination—is designed to build trust not only in the tool but in the AI systems it protects.
By giving security teams the ability to verify findings themselves rather than relying on an opaque score, the platform fosters a more robust and defensible security posture. For existing Permiso customers, who receive unrestricted access, SandyClaw can automatically analyze skills the moment the platform detects a download or installation, providing seamless and proactive protection.
As enterprises race to deploy AI agents to gain a competitive edge, the security controls governing them must evolve just as quickly. Tools that can look beyond static code to understand the true intent and behavior of AI capabilities at runtime will be essential for navigating the opportunities and risks of this new technological frontier.