The AI Exposure Gap: Report Reveals a Silent Cybersecurity Crisis
- 70% of organizations have integrated at least one third-party AI or Model Context Protocol (MCP) package, often without security oversight.
- 86% of organizations host third-party code packages with critical-severity vulnerabilities.
- 18% of organizations have granted AI services full administrative permissions, creating high-risk access points.
The report concludes that the rapid adoption of AI is outpacing cybersecurity measures, creating systemic vulnerabilities that demand a shift from reactive patching to proactive exposure management.
COLUMBIA, MD – February 19, 2026 – A groundbreaking report released today by cybersecurity firm Tenable reveals that the rapid, widespread adoption of artificial intelligence is creating a critical “AI exposure gap,” leaving organizations vulnerable to a new wave of invisible and unmanaged cyber risks. The research indicates that the velocity of AI-driven development and cloud integration is far outpacing the ability of security teams to assess and neutralize threats, leading to a precarious situation where companies inherit cyber risks faster than they can address them.
The ‘Cloud and AI Security Risk Report 2026’ from Tenable (NASDAQ: TENB) paints a stark picture of a new security paradigm. It details how the breakneck pace of innovation, fueled by third-party AI models and vast cloud infrastructures, has created a zero-margin environment for error. This exposure gap manifests across applications, infrastructure, and, most critically, digital identities, creating systemic vulnerabilities that most organizations are not equipped to manage.
“AI systems embedded in infrastructure pose a critical risk that CISOs and defenders must address, in addition to anticipating emerging threats from both AI and cloud technologies,” said Liat Hayun, Senior Vice President of Product Management and Research at Tenable, in the company's announcement. “Lack of visibility and governance means teams are at the mercy of new exposures, including over-privileged identities in the cloud.”
The Anatomy of a Hidden Threat
Tenable's research, which analyzed anonymized data from thousands of cloud environments, quantifies the silent creep of AI-related risk. A staggering 70% of organizations have already integrated at least one third-party AI or Model Context Protocol (MCP) package into their systems. These integrations often occur deep within applications and infrastructure, frequently without the central oversight of security teams, effectively creating a “shadow AI” ecosystem rife with potential vulnerabilities.
This dependency on external code creates a fragile and dangerous supply chain. The report found that 86% of organizations are hosting third-party code packages that contain critical-severity vulnerabilities, making the software supply chain a primary and persistent source of exposure. Even more alarmingly, nearly one in eight organizations (13%) have deployed packages with a known history of compromise, such as those hit by the s1ngularity attack or the Shai-Hulud worm, effectively inviting well-documented threats into their environments.
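In practice, defending against this class of risk starts with checking dependency manifests against a feed of known-compromised package versions. The sketch below illustrates the idea in Python; the package names, versions, and the denylist itself are illustrative placeholders, not a real advisory feed.

```python
# Sketch: flag dependencies whose pinned versions appear on a denylist of
# known-compromised releases. All package names and versions here are
# hypothetical examples, not real advisories.
KNOWN_COMPROMISED = {
    # package name -> set of compromised versions (illustrative)
    "left-padder": {"1.3.0", "1.3.1"},
    "color-utils": {"2.0.4"},
}

def audit_dependencies(manifest: dict) -> list:
    """Return (package, version) pairs from the manifest that match the
    compromised-version denylist."""
    return [
        (name, version)
        for name, version in manifest.items()
        if version in KNOWN_COMPROMISED.get(name, set())
    ]

deps = {"left-padder": "1.3.1", "requests-like": "0.9.9", "color-utils": "1.9.0"}
flagged = audit_dependencies(deps)  # only left-padder 1.3.1 matches
```

A real pipeline would pull the denylist from a maintained vulnerability database rather than a hard-coded dict, but the matching logic is the same.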
AI's Achilles' Heel: Identity and the Supply Chain
The report dives deeper into two specific areas that have become the primary vectors for exploitation in the AI era: the software supply chain and identity management. The lines between an organization's own code and third-party code have blurred, making it difficult to ascertain where risk truly lies. This is compounded by the nature of AI itself, which relies on complex, often open-source, libraries and models that are not always built with security as a first principle.
However, the most profound shift in the risk landscape comes from identity. The report highlights that non-human identities—such as AI agents and automated service accounts—now represent a higher risk (52%) than their human counterparts (37%). These digital agents are often granted broad permissions to function, creating what Tenable calls “toxic combinations” of access that fragmented security tools fail to connect or comprehend.
This is borne out by the finding that 18% of organizations have granted AI services full administrative permissions. These powerful accounts are rarely audited, leaving a standing set of privileges that, if compromised, would give an attacker the keys to the kingdom. This trend is corroborated by real-world breaches where attackers have specifically targeted service accounts and other non-human identities as a path of least resistance into corporate networks.
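Audits for this pattern typically walk the policies attached to each non-human identity and flag wildcard grants. The sketch below assumes cloud-IAM-style JSON policy documents; the identity names and policies are invented for illustration.

```python
# Sketch: flag service identities whose attached policies grant full
# administrative access (every action on every resource). The policy
# shape mirrors common cloud IAM JSON; the data is illustrative.
def is_full_admin(policy: dict) -> bool:
    """True if any Allow statement grants Action "*" on Resource "*"."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = actions if isinstance(actions, list) else [actions]
        resources = resources if isinstance(resources, list) else [resources]
        if "*" in actions and "*" in resources:
            return True
    return False

identities = {
    "ai-agent-prod": {"Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"}]},
    "report-bot": {"Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::reports/*"}]},
}
over_privileged = [n for n, p in identities.items() if is_full_admin(p)]
```

Real environments add nuance (policy conditions, deny statements, inherited group policies), but even this coarse check surfaces the "AI service with admin" anti-pattern the report describes.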
A Widening Chasm of 'Ghost' Credentials
Perhaps one of the most tangible risks identified in the report is the proliferation of “ghost” secrets and dormant accounts. An astonishing 65% of organizations possess unused or unrotated cloud credentials. While some digital clutter is expected, the report reveals that 17% of these ghost secrets are tied directly to critical administrative privileges, representing unlocked doors to the most sensitive parts of the infrastructure.
Compounding this issue is the prevalence of dormant yet powerful identities. According to the findings, nearly half (49%) of all digital identities holding critical-severity excessive permissions are completely dormant. These accounts, often left over from former employees, test projects, or decommissioned applications, are a goldmine for attackers. They lack oversight, their security hygiene is typically poor, and any malicious activity originating from them is less likely to be noticed. This pattern mirrors recent major breaches where attackers gained their initial foothold by compromising a single dormant test account that lacked modern security controls like multi-factor authentication.
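Hunting for these ghost secrets can be automated by comparing last-use and rotation timestamps against a staleness threshold, and escalating any stale credential that also carries admin rights. The field names and the 90-day threshold below are assumptions for illustration.

```python
# Sketch: flag "ghost" credentials (unused or never rotated past a
# threshold) and escalate those that also carry admin privileges.
# Record fields and the 90-day window are illustrative assumptions.
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)

def find_ghost_secrets(credentials, now):
    findings = []
    for cred in credentials:
        stale = (now - cred["last_used"]) > STALE_AFTER
        unrotated = (cred["last_rotated"] is None
                     and (now - cred["created"]) > STALE_AFTER)
        if stale or unrotated:
            severity = "critical" if cred["is_admin"] else "medium"
            findings.append((cred["id"], severity))
    return findings

now = datetime(2026, 2, 19, tzinfo=timezone.utc)
creds = [
    {"id": "svc-key-01", "last_used": now - timedelta(days=200),
     "created": now - timedelta(days=400), "last_rotated": None,
     "is_admin": True},   # dormant AND admin -> critical
    {"id": "dev-key-02", "last_used": now - timedelta(days=3),
     "created": now - timedelta(days=30), "last_rotated": None,
     "is_admin": False},  # recently used and young -> clean
]
findings = find_ghost_secrets(creds, now)
```

The severity escalation mirrors the report's point: a dormant credential is a nuisance, but a dormant credential with critical administrative privileges is an unlocked door.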
Beyond Patching: The Shift to Exposure Management
The report argues that these interconnected, fast-moving threats demand a fundamental shift in cybersecurity strategy—away from reactively patching individual vulnerabilities and toward proactively managing the entire landscape of potential exposure. This holistic approach, termed “exposure management,” involves identifying, evaluating, and prioritizing all possible entry points an attacker could exploit, from software flaws and cloud misconfigurations to the complex web of human and non-human identity risks.
The findings from Tenable align with a broader industry consensus that new frameworks are needed to govern the deployment of AI. Guidelines like the NIST AI Risk Management Framework (AI RMF) are emerging to help organizations map and measure risks associated with artificial intelligence. The core principle is to build a unified view of the attack surface, allowing security teams to understand the context of a vulnerability, prioritize threats based on business impact, and stop chasing an endless backlog of security debt.
By focusing on the unified exposure path, organizations can move from a state of constant reaction to one of strategic defense. This involves enforcing the principle of least privilege for AI roles, neutralizing the risk from dormant “ghost” identities, and gaining comprehensive visibility across the entire software supply chain. In an era where AI is both a powerful business enabler and a significant risk amplifier, understanding and managing this new exposure gap is no longer just a best practice—it is an urgent business imperative.
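One way to make exposure management concrete is to score findings on severity plus identity context, so that a medium-severity flaw reachable through an over-privileged, dormant identity can outrank an isolated critical CVE. The scoring weights below are purely illustrative, not drawn from the report.

```python
# Sketch: rank exposures by combining technical severity with identity
# context ("toxic combinations"). The multipliers are illustrative
# assumptions, not values from the Tenable report.
SEVERITY_SCORE = {"low": 1, "medium": 4, "high": 7, "critical": 10}

def exposure_score(finding: dict) -> float:
    score = float(SEVERITY_SCORE[finding["severity"]])
    if finding.get("admin_identity"):  # over-privileged identity in the path
        score *= 2.0
    if finding.get("dormant"):         # unmonitored "ghost" identity
        score *= 1.5
    return score

findings = [
    {"id": "CVE-A", "severity": "critical",
     "admin_identity": False, "dormant": False},   # isolated critical: 10.0
    {"id": "CVE-B", "severity": "medium",
     "admin_identity": True, "dormant": True},     # toxic combo: 4*2*1.5 = 12.0
]
ranked = sorted(findings, key=exposure_score, reverse=True)
```

Under these weights, the medium-severity finding on a dormant admin identity ranks above the standalone critical CVE, which is exactly the context-over-checklist shift exposure management argues for.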
