XM Cyber Tackles AI Attack Paths to Secure Innovation

📊 Key Data
  • 72% of CISOs fear generative AI could lead to a data breach
  • XM Cyber's platform maps AI attack paths across hybrid environments
  • New features address Shadow AI, misconfigurations, and Agentic AI risks

🎯 Expert Consensus

Experts agree that securing AI-driven innovation requires comprehensive visibility and governance to prevent complex, multi-stage attacks.

TEL AVIV, Israel – March 17, 2026 – In a move that directly addresses the growing tension between rapid AI adoption and enterprise security, XM Cyber today announced significant enhancements to its Continuous Exposure Management platform. The new capabilities are designed to give organizations the visibility and control needed to secure their expanding AI attack surfaces, aiming to resolve a dilemma that has security leaders on edge.

As businesses integrate artificial intelligence into their operations at an unprecedented pace, they are also inadvertently creating new avenues for cyberattacks. The rush to innovate often outpaces the implementation of necessary security controls, a gap that attackers are eager to exploit. XM Cyber's latest release introduces a suite of tools for discovering, analyzing, and remediating AI-related security exposures, promising to let companies embrace AI without handing attackers a roadmap to their most critical assets.

"Rapid AI adoption has created a dilemma for security leaders: innovate at speed, or maintain the controls needed to stay secure," said Boaz Gorodissky, CTO and Co-Founder of XM Cyber, in the company's announcement. "Our new functionality eliminates this friction by enabling security teams to identify and remediate AI-related exposures before attackers can exploit them."

The Expanding AI Threat Landscape

The urgency for such a solution is underscored by a rapidly evolving threat landscape. The proliferation of AI has introduced a new class of vulnerabilities that traditional security tools, often operating in silos, struggle to manage. Industry reports highlight a significant rise in concern, with one recent survey indicating that 72% of Chief Information Security Officers (CISOs) fear that generative AI solutions could lead to a data breach within their organization.

The risks are multifaceted. The use of unauthorized public AI services by employees, known as Shadow AI, can lead to sensitive company data being fed into models outside of corporate control. Misconfigurations in complex cloud AI environments—such as AWS Bedrock, Google Cloud Vertex AI, and Microsoft Azure OpenAI—can inadvertently expose proprietary models or the sensitive data used to train them. Furthermore, the rise of "Agentic AI," autonomous agents that can execute tasks on a network, presents a new high-impact attack path if not properly governed and secured.

These challenges create a complex web of potential exposures. An attacker might exploit a vulnerability in a public-facing web application, pivot to a misconfigured cloud environment, and then leverage an exposed API key found in an AI development project to access and exfiltrate data from a critical on-premises database. It is this chain of seemingly unrelated, minor exposures that can lead to a major security incident.

Bridging the Hybrid Security Gap

XM Cyber's strategy hinges on its ability to visualize and validate these complex attack paths across hybrid environments. The platform's enhancements are built on three core pillars: Comprehensive AI Attack Surface Visibility, Validated AI Attack Path Mapping, and AI Security Governance.

The first pillar involves discovering all AI assets, sanctioned and unsanctioned. This includes identifying employee use of public tools like OpenAI's ChatGPT and Google's Gemini, cataloging internal Model Context Protocol (MCP) servers used for agentic AI, and providing deep visibility into managed cloud AI services.

The platform's crown jewel, however, is its Attack Graph Analysis™. This technology, now extended to AI, models how an attacker could chain together disparate exposures across on-premises, cloud, and AI infrastructure. XM Cyber asserts this gives it a unique advantage, allowing it to validate complete attack paths that cross boundaries invisible to siloed tools. The platform can show precisely how an exposed credential in an AI model's configuration file could be the key that allows an attacker to move from the cloud to a company's internal network.
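The cross-boundary chaining described above can be illustrated as a shortest-path search over a small exposure graph. This is a toy sketch, not XM Cyber's actual data model: the node names, edge reasons, and graph structure are all illustrative assumptions.

```python
from collections import deque

# Toy attack graph: each node is an asset or exposure, and an edge means
# "an attacker who controls the first node can reach the second".
# All names and edges here are illustrative, not real platform output.
attack_graph = {
    "internet": ["public_web_app"],
    "public_web_app": ["cloud_iam_role"],    # unpatched web vulnerability
    "cloud_iam_role": ["ai_project_repo"],   # over-permissive cloud role
    "ai_project_repo": ["exposed_api_key"],  # credential in a config file
    "exposed_api_key": ["onprem_database"],  # key grants database access
    "onprem_database": [],
}

def find_attack_path(graph, start, target):
    """Breadth-first search for the shortest exposure chain from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the chain is broken

path = find_attack_path(attack_graph, "internet", "onprem_database")
print(" -> ".join(path))
# internet -> public_web_app -> cloud_iam_role -> ai_project_repo
#   -> exposed_api_key -> onprem_database
```

The point of the sketch is the remediation logic it implies: severing any single edge (rotating the exposed key, tightening the cloud role) breaks the whole chain, which is why path-level context matters more than fixing exposures in isolation.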

While the cybersecurity market is crowded with major players like Tenable, Palo Alto Networks, and CrowdStrike all rolling out their own AI Security Posture Management (AI-SPM) tools, XM Cyber is staking its claim on this comprehensive, cross-domain path analysis. Where many tools focus on securing the AI model or the cloud environment in isolation, the company's approach is to contextualize AI risks within the entire organizational attack surface. This is driven by internal research, such as the company's analysis of complex permissions and policies in AWS Bedrock, which informs how the platform identifies potential exploits.

From Security Posture to Regulatory Compliance

Beyond preventing breaches, the new capabilities are also designed to address the growing burden of regulatory compliance. The advent of sweeping legislation like the EU AI Act and influential guidelines such as the NIST AI Risk Management Framework is forcing organizations to adopt a more structured and responsible approach to AI deployment.

These frameworks demand robust governance, risk management, and continuous oversight—areas where automated security platforms can provide significant value. XM Cyber's AI Security Governance and Compliance features directly target these needs. The platform's ability to detect configuration drift, for instance, ensures that AI server definitions do not change in an unauthorized manner, helping to maintain a consistent and compliant security posture over time.
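Drift detection of the kind described can be sketched as diffing a current configuration snapshot against an approved baseline. The field names and values below are hypothetical examples, not the platform's actual schema.

```python
def detect_drift(baseline: dict, current: dict) -> dict:
    """Return fields that were added, removed, or changed relative to the baseline."""
    drift = {}
    for key in baseline.keys() | current.keys():
        if key not in current:
            drift[key] = ("removed", baseline[key], None)
        elif key not in baseline:
            drift[key] = ("added", None, current[key])
        elif baseline[key] != current[key]:
            drift[key] = ("changed", baseline[key], current[key])
    return drift

# Hypothetical AI server definition, before and after an unauthorized edit.
baseline = {"endpoint": "https://mcp.internal:8443", "auth": "mtls", "logging": True}
current  = {"endpoint": "https://mcp.internal:8443", "auth": "none", "logging": True,
            "public_access": True}

for field, (kind, old, new) in sorted(detect_drift(baseline, current).items()):
    print(f"{field}: {kind} ({old!r} -> {new!r})")
```

Running such a comparison continuously, rather than at audit time, is what turns a server definition into an enforceable policy: any unreviewed change surfaces as a finding the moment it appears.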

By continuously monitoring AI infrastructure against organizational policies and regulatory requirements, the platform helps transform compliance from a manual, point-in-time audit into an automated, ongoing process. This is critical for organizations operating under frameworks that mandate transparency and risk mitigation throughout the AI lifecycle. The ability to scan for and flag hardcoded API keys or excessive permissions not only reduces risk but also provides demonstrable proof of due diligence to auditors and regulators.
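A hardcoded-key scan of the sort mentioned can be sketched as pattern matching over source text. The rules below are a deliberately tiny, illustrative set; production scanners use far larger rule libraries plus entropy analysis, and nothing here reflects XM Cyber's actual detection logic.

```python
import re

# Illustrative patterns for a few common secret formats (assumptions, not
# a complete or authoritative rule set).
SECRET_PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]", re.IGNORECASE
    ),
}

def scan_for_secrets(text: str):
    """Return (rule_name, line_number) for every line matching a secret pattern."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

sample = 'model = "gpt-4"\nAPI_KEY = "sk-abcdefghijklmnopqrstuvwxyz"\n'
print(scan_for_secrets(sample))
```

Each finding pairs a rule name with a line number, which is exactly the shape an auditor needs: not just "a secret exists somewhere", but a reproducible pointer that due diligence was performed and where remediation is required.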

As AI becomes more deeply embedded in business processes, the distinction between AI security and general cybersecurity will continue to blur. The integration of AI exposure management into broader CTEM programs represents a crucial evolution, shifting the industry toward a more proactive, predictive, and unified security strategy. For organizations navigating the promise and peril of the AI revolution, the ability to see and sever potential attack paths before they can be exploited is no longer a luxury, but a fundamental requirement for secure innovation.
