Symmetry AIGuard Tackles the Unseen Risks of Enterprise AI

📊 Key Data
  • 68% surge in shadow Generative AI usage within enterprises (2025 study)
  • 1 in 4 CISOs reported experiencing an AI-generated attack in the past year
  • 70% of enterprises have AI agents in production, but only 29% feel prepared to secure them (2026 Cisco report)
🎯 Expert Consensus

Experts agree that Symmetry AIGuard addresses critical gaps in enterprise AI security: it combines AI inventory, identity management, data access intelligence, and compliance monitoring in a single governance platform, positioning it as a key tool for securing a rapidly evolving AI landscape.

SAN MATEO, CA – February 25, 2026 – As corporations race to integrate artificial intelligence into every facet of their operations, a new and chaotic digital frontier is emerging: one rife with unseen risks, rogue AI agents, and vast, unsecured data access. Today, Data+AI security firm Symmetry Systems launched Symmetry AIGuard, a platform designed to impose order on this new wild west, providing what it calls a unified command center for the entire enterprise AI ecosystem.

The launch comes at a critical juncture for businesses globally. The pressure to innovate with AI is immense, yet the tools to govern it have lagged dangerously behind. "Organizations are deploying AI faster than they can secure it," said Dr. Mohit Tiwari, CEO of Symmetry Systems, in a statement. This gap has created a landscape where autonomous AI agents can operate with minimal oversight and employees routinely feed sensitive corporate data into unsanctioned public LLMs, a phenomenon known as 'shadow AI.'

Recent industry reports paint a stark picture of this reality. One 2025 study revealed a staggering 68% surge in shadow Generative AI usage within enterprises, with nearly half of all employees admitting to inputting sensitive data into free-tier tools using personal accounts. The consequences are tangible, with IT leaders reporting data leakage and intellectual property exposure as direct outcomes. For CISOs, the threat is escalating; one in four reported experiencing an AI-generated attack in the past year, making the security of AI agents a top concern for 2026.

A Unified Approach to a Fractured Problem

Symmetry AIGuard aims to consolidate AI security by focusing on four critical pillars: external Large Language Models (LLMs), internal AI services, corporate copilots, and the burgeoning world of agentic AI. The goal is to provide a single source of truth for executives across technology, legal, data, and security teams.

"The question isn't just 'what AI do we have?' - it's 'what can that AI access, and should it?'" noted Mustapha Kebbeh, Chief Security Officer at UKG, highlighting the platform's comprehensive vision. "Symmetry is bringing together AI inventory, identity governance, data access intelligence, and compliance monitoring in a single product... No other solution currently connects all four."

This integrated strategy directly confronts the multifaceted nature of AI risk:

  • External LLM Governance: AIGuard monitors both sanctioned and unsanctioned use of public AI like ChatGPT and Gemini. Through proxy integrations, it identifies which models are in use, who is using them, and critically, what data is being shared in prompts, flagging policy violations in real time.

  • Corporate Copilot Security: Building on its established capabilities for Microsoft Copilot, the platform extends governance to other enterprise copilots. It provides dashboards to track licensing, user activity, and the copilot's potential access to overexposed data, allowing for direct remediation.

  • Internal AI Services Security: As companies build their own predictive and generative AI models, AIGuard inventories these internal services, tracking ownership, data access, and potential regulatory issues like data residency violations.

  • Agentic AI Governance: Perhaps most significantly, the platform addresses the next frontier of enterprise risk: autonomous AI agents.
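The first pillar's prompt-level monitoring can be pictured as a classifier sitting at the proxy layer. The following is a minimal, hypothetical sketch, not Symmetry's actual detection logic: the pattern names, thresholds, and record format are illustrative assumptions.

```python
import re

# Illustrative sensitive-data patterns a proxy might scan outbound
# prompts against before they reach an external LLM. These rules are
# assumptions for demonstration, not AIGuard's real identifiers.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(user: str, model: str, prompt: str) -> list[dict]:
    """Return one policy-violation record per sensitive pattern found."""
    violations = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            violations.append({"user": user, "model": model, "type": label})
    return violations

hits = scan_prompt("alice@corp.example", "chatgpt",
                   "Summarize client SSN 123-45-6789 for the report")
print(hits)  # flags one 'ssn' violation
```

A production system would pair detections like these with the real-time blocking or alerting policies described above, rather than simply logging them.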

Governing the New Autonomous Workforce

"Agentic AI is the next frontier of enterprise risk," warned Dr. Tiwari. "These agents act autonomously, access sensitive systems, and make decisions. Yet most organizations have zero visibility into them." Research supports this urgency, with nearly 70% of enterprises reportedly having AI agents in production, even as a 2026 Cisco report found only 29% felt prepared to secure them.

Symmetry AIGuard approaches this challenge by treating agent identities as first-class security principals, with the same rigor applied to human employees. The platform creates a complete inventory of every agent, classifying its type and registering its creator and purpose. For each autonomous agent, it maps out its permissions, the scope of data it can access, and its potential "blast radius" should it be compromised or act maliciously.

A built-in sanctioning workflow ensures no agent operates without formal authorization. The system automatically surfaces high-risk agents, such as those with excessive permissions, access to sensitive data, destructive capabilities like deleting files, or those that have become 'orphaned' with no identifiable owner.
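The high-risk criteria above (excessive permissions, destructive capabilities, orphaned or unsanctioned agents) amount to a rule set over the agent inventory. A minimal sketch, with thresholds and field names chosen purely for illustration:

```python
# Hypothetical risk rules mirroring the criteria described above.
# The permission threshold and field names are illustrative assumptions.
def risk_flags(agent: dict) -> list[str]:
    """Return the list of risk flags that apply to an inventoried agent."""
    flags = []
    if len(agent.get("permissions", [])) > 10:
        flags.append("excessive-permissions")
    if "delete" in agent.get("capabilities", []):
        flags.append("destructive-capability")
    if not agent.get("owner"):
        flags.append("orphaned")
    if not agent.get("sanctioned", False):
        flags.append("unsanctioned")
    return flags

agent = {"name": "cleanup-bot", "capabilities": ["read", "delete"],
         "permissions": ["archive-bucket"], "owner": None,
         "sanctioned": False}
print(risk_flags(agent))
# ['destructive-capability', 'orphaned', 'unsanctioned']
```

In practice a sanctioning workflow would gate such an agent until an owner is assigned and the flags are reviewed.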

Beyond Monitoring: Merging Identity with Data Context

What differentiates Symmetry's approach from a crowded field of emerging AI security tools is its foundation. AIGuard is built upon the same data and identity graph that powers the company's core DataGuard platform. This underlying engine, born from DARPA-funded research, leverages over 400 sensitive data identifiers and 500 semantic data types.

This data-centric design means AIGuard doesn't just see that an AI agent accessed a file; it understands that the agent, which has permissions to act autonomously, just accessed a file containing classified financial projections. By uniquely merging the context of the identity (human or AI) with the context of the data, the platform provides a far deeper and more actionable view of risk than solutions that only monitor AI activity or data posture in isolation.
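The identity-plus-data join described above can be expressed as a simple scoring function over access events: risk rises with both the autonomy of the identity and the sensitivity of the data it touched. The weights below are illustrative assumptions, not Symmetry's scoring model.

```python
# Assumed weights: more autonomous identities and more sensitive data
# classes multiply into a higher access-risk score.
IDENTITY_WEIGHT = {"human": 1, "copilot": 2, "autonomous_agent": 3}
DATA_WEIGHT = {"public": 0, "internal": 1, "confidential": 3, "restricted": 5}

def access_risk(identity_type: str, data_class: str) -> int:
    """Score an access event by joining identity and data context."""
    return IDENTITY_WEIGHT[identity_type] * DATA_WEIGHT[data_class]

# An autonomous agent reading restricted financial data scores far
# higher than a human reading an internal document.
print(access_risk("autonomous_agent", "restricted"))  # 15
print(access_risk("human", "internal"))               # 1
```

The point of the sketch is the join itself: monitoring either column in isolation, as activity-only or posture-only tools do, cannot produce this combined signal.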

This capability is becoming essential as organizations navigate an increasingly complex regulatory minefield. Frameworks like the EU AI Act and the NIST AI Risk Management Framework demand unprecedented levels of transparency, accountability, and risk management. With data privacy laws like GDPR and CCPA imposing heavy fines for data leakage, the ability to prove that AI systems are not mishandling personal information is no longer optional. A unified governance platform that can track an AI's lineage, permissions, and data interactions provides the auditable trail necessary for compliance.

By offering a comprehensive solution, Symmetry Systems is betting that the era of prohibiting AI use is over. The new imperative is secure enablement. With AIGuard now available in preview to existing customers, the market will soon decide if this unified approach is the key to finally letting enterprises innovate with AI confidently, without leaving the back door wide open.
