Mindgard Debuts Autonomous Platform to Map AI Attack Surfaces
- $25 billion: Projected market size for AI security by 2035, up from $3.5 billion in 2025
- 80 vulnerabilities: Identified in 90 days across major AI technologies
- First autonomous reconnaissance: Platform designed specifically for AI attack surface mapping
Mindgard positions its autonomous reconnaissance platform as closing critical gaps in AI security, offering proactive defense against emerging threats in an increasingly exposed AI landscape.
BOSTON, MA – March 17, 2026 – As enterprises race to integrate artificial intelligence, a new front has opened in the cybersecurity war. Boston-based AI security firm Mindgard today announced a major expansion of its platform, introducing what it calls the industry’s first autonomous reconnaissance capability designed specifically to secure AI models, agents, and applications. The new feature aims to give security teams a fighting chance by automatically discovering and mapping the complex and often hidden attack surfaces of their AI deployments.
This move comes at a critical time, as the rapid adoption of generative AI has outpaced the development of specialized security controls. Traditional application security tools have proven inadequate against the novel vulnerabilities introduced by the probabilistic and opaque nature of AI systems. Mindgard's platform is engineered to address this gap, enabling organizations to continuously discover, assess, and defend their AI infrastructure against emerging threats.
Redefining AI Defense with Attacker-Style Reconnaissance
At the heart of Mindgard's announcement is its new Reconnaissance module. Unlike traditional vulnerability scanners that look for known code flaws, this capability operates like an adversarial team, automatically performing intelligence gathering to understand how an AI system truly behaves in a production environment. It probes the system to identify its components, including the guardrails intended to keep it safe, the powerful system prompts that dictate its core behavior, and the various tools, integrations, and external services it can access.
By mapping these elements, the platform reveals the real-world attack surface and uncovers potential “agentic attack paths”—sequences of actions an attacker could manipulate an AI agent into performing. This allows security teams to move beyond theoretical risks and focus on tangible, high-impact vulnerabilities. The goal is to shift AI security from a reactive, incident-response model to a proactive, preventative posture.
The effectiveness of this approach has been validated by early users. Mindgard's work with Zed Industries on its Zed editor, which integrates AI into the development process, highlighted how traditional trust assumptions can break down. “Mindgard's research resulted in actionable vulnerability submissions that we were able to act on swiftly,” said John Swanson, Head of Security at Zed Industries. “Addressing these vulnerabilities hardened the Zed editor against a class of vulnerabilities common to development tools integrating AI, improving the security posture of Zed and our broader developer community as a whole.”
Navigating a Crowded and Competitive Landscape
The launch enters a market experiencing explosive growth and intense competition. Industry analysts project the AI security market to grow from approximately $3.5 billion in 2025 to over $25 billion by 2035, fueled by the urgent need to counter AI-enhanced cyberattacks. While many established cybersecurity giants like SentinelOne and Darktrace leverage “autonomous AI” for threat hunting and endpoint response, Mindgard’s focus on reconnaissance for the AI model layer itself carves out a specific and critical niche.
The claim of being the “first autonomous” platform in this domain is nuanced. The cybersecurity landscape is rife with automation, and other startups are tackling similar problems. Tenzai, for instance, recently made headlines with its “AI Hacker,” an autonomous system designed for penetration testing that has achieved high rankings in global hacking competitions. However, Mindgard differentiates itself by focusing specifically on the initial reconnaissance phase for the entire AI stack—the models, agents, and applications—rather than general application testing. This specialized focus on mapping the unique terrain of AI systems is where the company stakes its claim.
From Academic Research to Commercial Shield
Mindgard’s credibility is significantly bolstered by its deep academic roots. The company was spun out of Lancaster University, home to one of the world's largest AI security laboratories, and is built upon more than a decade of pioneering research. The firm’s CEO, Dr. Peter Garraghan, is also a Professor of Computer Science at the university, and his lab’s work identified the fundamental shortcomings of applying traditional security methods to AI.
This foundation in offensive security research and PhD-led R&D powers the platform’s core engine: a vast attack library covering thousands of unique AI attack scenarios. This library enables the platform to emulate how sophisticated adversaries scope, plan, and execute attacks against AI systems. By combining rigorous academic research with practical, attacker-aligned testing, Mindgard aims to provide a defense that is not just automated, but also intelligent and deeply informed by the very nature of the threats it is designed to stop. This synergy between scientific discovery and commercial application provides a level of expert-driven validation that is crucial in the high-stakes field of AI security.
The Proving Ground: Vulnerabilities and Future Showcases
To demonstrate its platform's capabilities, Mindgard reports that in the last 90 days alone, it has identified more than 80 publicly reported vulnerabilities across some of the most prominent AI technologies in the world. The company specifically cited its work uncovering security flaws in xAI’s Grok, OpenAI’s ChatGPT, and Google’s Antigravity IDE. While the specifics of every vulnerability are not always public, the claim underscores the platform's potential to find and help fix flaws before they can be widely exploited by malicious actors.
Looking ahead, Mindgard plans to engage directly with the security community at the upcoming RSAC 2026 conference. The company will be hosting a throwback-themed booth designed to evoke the 1990s dot-com era, complete with retro hacker gear and live demonstrations of its platform. A key feature will be an AI-centric capture-the-flag challenge, inviting security professionals to test their skills against the very kinds of AI systems the company's new platform is built to protect.
