AI's Silent Threat: Unmanaged Identities a Ticking Time Bomb for Firms
- 79% of organizations have low to moderate confidence in preventing cyberattacks exploiting non-human identities (NHIs).
- NHIs outnumber human employees by 20 to 1 (up to 92 to 1 in cloud-native environments).
- Only 14% of organizations have fully automated the creation and removal of AI-related identities.
- Experts emphasize the urgent need for automated, policy-driven governance of non-human identities to mitigate escalating cybersecurity risks in AI-driven environments.
SEATTLE, WA – January 27, 2026 – A startling new report reveals a massive blind spot in corporate cybersecurity, as the vast majority of IT professionals feel dangerously unprepared to manage the security risks posed by artificial intelligence. A survey released today by the Cloud Security Alliance (CSA) and identity security platform Oasis Security found that 79% of organizations have low to moderate confidence in their ability to prevent cyberattacks that exploit non-human identities (NHIs)—the digital keys held by applications, AI agents, and automated services.
The report, titled The State of Non-Human Identity and AI Security, surveyed 383 IT and security professionals and paints a grim picture of systemic failure. As companies rush to integrate AI into every facet of their operations, they are simultaneously creating an exponentially growing and largely ungoverned army of digital identities, leaving the door wide open for catastrophic breaches.
The Proliferation of Invisible Workers
Non-human identities are the invisible workforce of the digital age. They are the API keys, service accounts, and tokens that allow applications to communicate, data to flow between systems, and AI agents to autonomously perform complex tasks. While essential for automation and innovation, their numbers are exploding. Industry experts estimate that NHIs already outnumber human employees by a factor of 20 to 1, with some projections as high as 92 to 1 in cloud-native environments.
Despite their ubiquity and power, these identities operate in the shadows. The survey highlights a critical governance vacuum: a staggering 78% of organizations lack documented and formally adopted policies for creating or removing AI-related identities. Compounding the issue, 51% report that no single person or team has clear ownership or accountability for these powerful credentials. This lack of oversight has led to a state of “shadow AI,” where automated agents operate with broad, often excessive permissions that are neither monitored nor managed.
“Organizations with limited visibility and unclear ownership are feeling the strain of AI-driven identities and securing identities in the AI era,” said Hillary Baron, AVP of Research at the Cloud Security Alliance. “Establishing strong identity foundations now is critical to reducing risk and confidently scaling AI use.”
Legacy Systems Buckle Under AI's Pressure
The root of the problem lies in a fundamental mismatch between technology and threat. Traditional Identity and Access Management (IAM) solutions were designed to manage human users, who operate on predictable schedules and follow established workflows. These legacy systems are simply not built for the scale, speed, and autonomy of AI and machine identities.
The survey data confirms this technological disconnect, with an overwhelming 92% of respondents stating they are not confident that their legacy IAM tools can effectively manage the risks associated with AI and NHIs. This lack of confidence is well-founded, as organizations continue to rely on dangerously slow and manual processes for a high-velocity problem.
Even when policies exist, their execution is alarmingly inefficient. Only 14% of organizations have fully automated the creation and removal of AI-related identities, while 27% still handle these critical security tasks entirely by hand. This reliance on manual intervention not only stifles innovation but also creates an enormous window of opportunity for attackers. Nearly a quarter (24%) of organizations admitted it takes them more than 24 hours to rotate or revoke a credential after a potential exposure, giving malicious actors ample time to infiltrate systems and exfiltrate sensitive data.
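The contrast between manual and automated lifecycle handling is essentially a question of credential age. As a hedged sketch (the function names and the dict-based credential store are illustrative assumptions, not any vendor's API), an automated process would simply re-issue anything older than a fixed window, closing the multi-day gap that manual ticket queues leave open:

```python
from datetime import datetime, timedelta, timezone

# Rotate anything older than the 24-hour window the survey uses as a benchmark.
MAX_AGE = timedelta(hours=24)

def rotate_stale(credentials: dict[str, datetime], now: datetime) -> dict[str, datetime]:
    """Replace every credential older than MAX_AGE with a freshly issued one.

    `credentials` maps a credential name to its issue time; in a real system
    the value would be an opaque secret and rotation would call the issuer's API.
    """
    rotated = {}
    for name, issued_at in credentials.items():
        if now - issued_at > MAX_AGE:
            rotated[name] = now           # re-issue: new secret, new timestamp
        else:
            rotated[name] = issued_at     # still fresh, keep as-is
    return rotated

now = datetime(2026, 1, 27, 12, 0, tzinfo=timezone.utc)
creds = {
    "build-bot-token": now - timedelta(hours=2),   # fresh
    "legacy-svc-key": now - timedelta(days=30),    # stale: a manual process missed it
}
rotated = rotate_stale(creds, now)
print(rotated["legacy-svc-key"] == now)  # True: the stale key was re-issued
```

Run on a schedule, a loop like this bounds attacker dwell time to hours; run from a ticket queue, the same logic takes however long the queue takes.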
A Governance Vacuum and the Automation Imperative
This operational friction is a direct symptom of a deeper strategic failure. For years, security teams have focused on human error, but the new frontier of risk lies in ungoverned machine access. The rapid, often unsanctioned adoption of generative AI tools has only intensified the problem, with 75% of organizations reporting the discovery of unauthorized AI tools running in their environments, many with embedded credentials granting them deep system access.
“AI turns identity into a high-velocity system,” stated Danny Brickman, CEO and Co-Founder of Oasis Security. “Every new agent, workflow, or integration can mint credentials and permissions in minutes. Too many organizations still govern that with spreadsheets and unsophisticated processes. That’s not an AI strategy–that’s an incident backlog.”
The path forward, according to experts, is a radical shift toward automated, policy-driven governance. The manual, ticket-based approach is obsolete in an era where machines create other machines. Brickman insists the solution is straightforward in principle, though challenging in practice. “The fix is simple,” he continued. “Assign clear ownership, lock policy in writing, and automate the lifecycle before machine access scales beyond control.”
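Brickman's three-part prescription lends itself to a policy-as-code reading. The sketch below is one possible interpretation under stated assumptions (the policy fields, request shape, and `approve_identity` gate are all hypothetical): write the policy down as data, then enforce it automatically before any credential is minted.

```python
from datetime import timedelta

# A hypothetical written policy: every new machine identity must name an
# accountable owner and carry a finite lifetime, checked at creation time.
POLICY = {
    "require_owner": True,
    "max_ttl": timedelta(days=90),
}

def approve_identity(request: dict) -> tuple[bool, str]:
    """Gate a new NHI request against the written policy before minting anything."""
    if POLICY["require_owner"] and not request.get("owner"):
        return False, "rejected: no accountable owner assigned"
    if request.get("ttl", timedelta.max) > POLICY["max_ttl"]:
        return False, "rejected: lifetime exceeds policy maximum"
    return True, "approved"

print(approve_identity({"name": "report-agent", "owner": "data-team",
                        "ttl": timedelta(days=30)}))   # (True, 'approved')
print(approve_identity({"name": "gpt-agent-7",
                        "ttl": timedelta(days=365)}))  # rejected: no owner
```

The point of the pattern is that the policy lives in one reviewable place and the gate runs on every request, so governance scales at the same speed as the agents it governs.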
Charting a Path Through the Agentic Age
The findings from the CSA and Oasis Security report are not an isolated warning but a reflection of a growing consensus among industry analysts. Research from firms like Forrester has repeatedly highlighted the “explosive growth” of non-human identities as a primary driver of an expanding corporate attack surface. The World Economic Forum has also flagged the rise of agentic AI—autonomous agents that can act on a user's behalf—as a critical cybersecurity challenge that requires immediate attention.
In response, a new segment of the cybersecurity market is emerging, focused exclusively on NHI and AI identity security, with market projections suggesting it could become a $10 billion industry by 2026. Companies are developing platforms that provide unified visibility, automated lifecycle management, and Zero Trust controls specifically for these machine identities. Frameworks like the Agentic Access Management (AAM) model are being introduced to provide CISOs with a structured approach to governing this new ecosystem.
The challenge for enterprises is no longer whether to adopt AI, but how to do so without compromising their entire security posture. The silent, invisible work of non-human identities has become the new battleground for cybersecurity. Ultimately, the race to harness the power of artificial intelligence will be won not by the fastest innovators but by the most secure, and that security begins and ends with mastering the identities that now run the enterprise.
