Alice Unveils Tool to Tame AI's 'Wild West' After Malware Discovery

📊 Key Data
  • 6,000 users affected by malicious AI 'skills' on OpenClaw
  • Over 10% of the 2,800+ skills audited by Koi Security found to be malicious
  • 145,000 stars on GitHub for OpenClaw, indicating rapid adoption
🎯 Expert Consensus

Experts emphasize that while AI agents offer powerful automation capabilities, their unsecured nature poses significant risks, necessitating tools like Alice's Caterpillar to ensure safety and prevent systemic vulnerabilities.

NEW YORK, NY – February 04, 2026 – By Susan Powell

Trust and safety firm Alice today launched Caterpillar, a free, open-source security scanner aimed at policing the burgeoning world of autonomous AI agents. The release is not merely a proactive measure but a direct response to a stark warning shot: the discovery of malicious AI 'skills' on the popular OpenClaw platform that had already been installed by over 6,000 users.

The incident casts a harsh light on the rapidly expanding, yet dangerously unsecured, frontier of AI agents—software designed to operate with increasing autonomy. As these agents move from experiments to operational tools that can browse the web, access files, and execute commands, they create a new and formidable attack surface. Alice's new tool aims to give developers and users a fighting chance to see what dangers lurk beneath the surface.

The OpenClaw Gold Rush

To understand the threat, one must first understand OpenClaw. Formerly known as ClawdBot and Moltbot, OpenClaw is an open-source framework that has exploded in popularity, amassing over 145,000 stars on GitHub in a remarkably short period. It allows users to run a personal AI assistant on their own machine, granting it 'hands' to interact with the digital world—managing emails, automating browser tasks, and integrating with messaging apps like Slack and WhatsApp.

Its appeal lies in its power and accessibility. With a one-click installer, even non-technical users can deploy a sophisticated AI agent. A thriving community has sprung up around it, contributing thousands of 'skills'—extensions that grant the AI new capabilities, from managing smart home devices to trading cryptocurrency. These skills are often shared on marketplaces like ClawHub, creating a vibrant but unregulated ecosystem.

However, security experts have been watching this explosive growth with growing alarm. The very architecture that makes OpenClaw so powerful also makes it, in the words of one security research team, an "absolute nightmare." By design, the agent is granted broad access to the user's computer, including the ability to run shell commands and read and write files. This creates a high-stakes environment where a single malicious skill can lead to a total system compromise.

A Playground for Malware

Alice's discovery of malicious skills affecting thousands was not an isolated event. It was a symptom of a deeper, systemic vulnerability. Independent audits have confirmed the scale of the problem. One security firm, Koi Security, recently audited over 2,800 skills on the ClawHub repository and found that over 10% were malicious. The majority were part of a coordinated campaign dubbed "ClawHavoc."

Attackers cleverly disguised these malicious skills as popular tools, using typosquatting and deceptive descriptions to lure users. Once a user installed a compromised skill, they were prompted to install a "required prerequisite." This prerequisite was, in fact, the Atomic Stealer (AMOS) malware, a potent information-stealer targeting credentials on both macOS and Windows systems.
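Typosquatting of this kind can be caught mechanically by comparing a new skill's name against well-known ones. The sketch below is purely illustrative: the skill names and similarity threshold are assumptions, not data from the ClawHub audit.

```python
# Illustrative sketch: flagging skill names that are suspiciously close to,
# but not identical to, popular skills -- the classic typosquatting lure.
# The names in POPULAR_SKILLS and the 0.85 threshold are hypothetical.
from difflib import SequenceMatcher

POPULAR_SKILLS = {"email-manager", "browser-automator", "slack-connector"}

def is_likely_typosquat(candidate: str, threshold: float = 0.85) -> bool:
    """Return True if `candidate` nearly matches a known skill name."""
    for known in POPULAR_SKILLS:
        similarity = SequenceMatcher(None, candidate, known).ratio()
        if candidate != known and similarity >= threshold:
            return True
    return False

print(is_likely_typosquat("email-managr"))    # near-duplicate of a popular name
print(is_likely_typosquat("weather-widget"))  # unrelated name
```

Real marketplaces would combine this with richer signals (publisher reputation, download velocity, description similarity), but even a simple edit-distance check raises the cost of the lure.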

This supply-chain attack is just one of several critical vulnerabilities plaguing the ecosystem. Researchers have pointed to a "lethal trifecta" of risks inherent in AI agents: access to private data, exposure to untrusted content from the internet, and the ability to execute actions. OpenClaw's design, which has been criticized for leaking API keys in plaintext and has required patches for remote code execution flaws, combines all three in a perilous way.
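The trifecta framing lends itself to a simple policy check: a skill whose requested permissions span all three risk categories at once deserves heightened scrutiny. The permission names below are hypothetical, chosen only to make the idea concrete.

```python
# Sketch of the "lethal trifecta" as a policy check. A skill is flagged when
# its permissions cover private data, untrusted input, AND action execution.
# All permission names here are hypothetical, for illustration only.
PRIVATE_DATA = {"read_files", "read_email", "read_credentials"}
UNTRUSTED_INPUT = {"browse_web", "read_inbound_messages"}
EXECUTE_ACTIONS = {"run_shell", "send_email", "write_files"}

def trifecta_risk(permissions: set[str]) -> bool:
    """Return True when the permission set spans all three categories."""
    return (bool(permissions & PRIVATE_DATA)
            and bool(permissions & UNTRUSTED_INPUT)
            and bool(permissions & EXECUTE_ACTIONS))

print(trifecta_risk({"read_email", "browse_web", "run_shell"}))  # all three
print(trifecta_risk({"browse_web", "run_shell"}))                # only two
```

Any one capability in isolation may be benign; it is the combination that turns a convenience feature into an exfiltration pipeline.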

"Agent ecosystems are scaling faster than the security assumptions around them," said Noam Schwartz, CEO of Alice, in the announcement. "When you install a skill, you're not installing a feature, you're installing behavior. Caterpillar helps teams see what they're actually running, and catch issues early, before they become incidents."

An Open-Source Defense

In response to this emerging threat, Alice's Caterpillar provides a new line of defense. The tool is a static analyzer, meaning it inspects the code and configuration of an AI skill before it is run. It is designed to flag suspicious patterns, such as injection paths that could allow an attacker to hijack the agent, unsafe requests for tool access, and other obfuscated behaviors indicative of malicious intent.
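Static pattern flagging of this sort can be sketched in a few lines. To be clear, the rules and flow below are assumptions made for illustration, not Caterpillar's actual detection logic.

```python
# Minimal sketch of static pattern scanning, loosely in the spirit of what a
# tool like Caterpillar is described as doing: inspect a skill's source before
# it runs and flag suspicious constructs. The patterns are illustrative
# assumptions, not Caterpillar's real rule set.
import re

SUSPICIOUS_PATTERNS = {
    "shell-execution": re.compile(r"\b(subprocess|os\.system|eval|exec)\b"),
    "obfuscation": re.compile(r"\b(base64\.b64decode|codecs\.decode)\b"),
    "remote-fetch": re.compile(r"https?://[^\s\"']+"),
}

def scan_skill_source(source: str) -> list[str]:
    """Return the names of suspicious patterns found in a skill's code."""
    return [name for name, pattern in SUSPICIOUS_PATTERNS.items()
            if pattern.search(source)]

sample = 'import base64; exec(base64.b64decode(payload))'
print(scan_skill_source(sample))
```

A production scanner would parse the code rather than grep it, and would weigh findings against the skill's declared permissions, but the principle is the same: inspect behavior before granting it hands.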

Crucially, Caterpillar's intelligence is powered by RabbitHole, Alice's proprietary adversarial intelligence database, which is built on nearly a decade of threat research. This allows the tool to recognize tactics and techniques used by real-world attackers targeting AI systems.

By releasing Caterpillar as a free and open-source project, Alice is betting on the power of community-led defense. The move enables developers and security teams worldwide to audit, extend, and integrate the tool into their own workflows. This collaborative approach is essential for securing a complex, decentralized, and rapidly evolving ecosystem like OpenClaw. It aims to build a collective immunity, where security knowledge is shared and implemented across the community, raising the bar for attackers.

For an industry captivated by the promise of autonomous agents, the ClawHavoc incident and the subsequent release of Caterpillar serve as a critical reality check. The power of AI agents to automate and assist is immense, but it is directly proportional to the risk they pose if left unsecured. As developers and enterprises rush to deploy these new capabilities, the focus must shift from simply what these agents can do to what they should be allowed to do. Tools that provide visibility and control are no longer optional but essential for ensuring the future of AI is both innovative and safe.

Product: AI & Software Platforms
Sector: AI & Machine Learning, Cybersecurity, Software & SaaS
Theme: Agentic AI, Data Breaches, Threat Landscape
Event: Compliance Action, Product Launch
UAID: 14180