AI Forges Risky Alliance Between Identity and Data Security
- 2026-2029 Forecast: AI-driven convergence of identity and data security will reshape cyber threats, with automation and agentic AI creating new attack vectors.
- Insurance Impact: Cyber insurers are shifting to continuous validation models, demanding real-time identity and data security controls for coverage.
- AI Attack Evolution: Immediate threat is AI-accelerated attacks (reconnaissance, social engineering) rather than fully autonomous cyberattacks.
Experts warn that organizations must adopt unified identity and data security governance to mitigate AI-driven risks, as failures in one domain now directly imperil the other.
FRISCO, TX – January 27, 2026 – A new security forecast predicts that the rise of artificial intelligence is fundamentally reshaping the cyber threat landscape by creating an inseparable dependency between identity and data security. The report, released today by the Netwrix Security Research Lab, warns that the next wave of disruption will come from adversaries exploiting this convergence, using AI to scale identity-based attacks to compromise sensitive data.
The multi-year outlook, which covers trends from 2026 through 2029, argues that as organizations increasingly rely on automation and autonomous AI systems, the line between managing who can access data and protecting the data itself has effectively vanished. This shift demands a unified approach to security, as failures in one domain now directly imperil the other.
The New Battleground: Identity and Data Convergence
The central prediction for 2026 is the tightening bond between identity security and data security, driven by two key factors: identity automation and the proliferation of 'agentic AI'. For years, security teams have worked to manage digital identities—the user accounts, service accounts, and permissions that grant access to corporate resources. Now, with the expansion of automated workflows for tasks like employee onboarding or privilege management, these identity systems are not just granting access; they are actively orchestrating it at scale.
According to the Netwrix forecast, this means adversaries are shifting their focus. Rather than solely targeting individual user credentials, they are now attacking the automation itself—exploiting misconfigured workflows, federation trusts, and orchestration platforms to gain broad access. A single flaw in an automated identity process could expose vast repositories of sensitive data.
This risk is amplified by the emergence of agentic AI, which refers to AI systems capable of performing tasks autonomously to achieve specific goals. As these AI agents are deployed to optimize business processes, they require their own identities to access applications, move files, and act on data. This raises critical governance questions: Which identities do these AI agents use? What data can they access? And under whose authority do they operate?
Without robust, unified controls, an autonomous AI agent with excessive permissions could become a powerful tool for data exfiltration, either through malicious compromise or unintentional misconfiguration. The speed and scale at which these agents operate mean that a potential data exposure event could unfold far faster than a human security team could react.
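The governance questions above can be made concrete with a minimal deny-by-default sketch: before an autonomous agent acts on data, its identity is checked against an explicit grant list that also records the human authority it operates under. All names here (the `AGENT_POLICY` table, the agent and resource identifiers) are illustrative assumptions for exposition, not part of any specific product or the Netwrix report.

```python
# Hypothetical policy: each agent identity gets an explicit, minimal set of
# (resource, action) grants and a named human owner under whose authority
# it operates -- two of the governance questions raised above.
AGENT_POLICY = {
    "report-bot": {
        "owner": "finance-team-lead",
        "grants": {("sales_db", "read"), ("reports_share", "write")},
    },
}

def authorize(agent: str, resource: str, action: str) -> bool:
    """Deny by default: an agent may act only on explicitly granted pairs."""
    entry = AGENT_POLICY.get(agent)
    if entry is None:
        return False  # unknown agent identity: no implicit access
    return (resource, action) in entry["grants"]

# An over-broad request -- e.g. exporting the whole database -- is refused,
# even though the agent legitimately reads from it.
print(authorize("report-bot", "sales_db", "read"))    # True
print(authorize("report-bot", "sales_db", "export"))  # False
print(authorize("rogue-bot", "sales_db", "read"))     # False
```

The design choice worth noting is the default: absence from the policy table means no access, so a newly deployed or misconfigured agent starts with zero permissions rather than inheriting broad ones.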
"The threat landscape isn't only expanding because attackers suddenly have better tools," said Dirk Schrader, Vice President of Security Research at Netwrix, in the press release. "It's also expanding because identity security, data security, and automation are becoming inseparable. Organizations that succeed will be the ones that govern identity and data security together and treat automation as something to be continuously validated, not blindly trusted."
Your Cyber Insurance Policy Is Watching
The growing risk posed by AI-driven automation is not going unnoticed by the cyber insurance industry. The Netwrix forecast highlights a significant shift in how insurers assess and price risk, moving away from static, periodic questionnaires toward a model of continuous validation.
As identity failures become a primary driver of data breaches, insurers are expected to demand real-time telemetry demonstrating that robust identity and data security controls are in place and working effectively. This represents a major operational and financial challenge for businesses. Organizations that can provide verifiable proof of strong identity governance—showing who is accessing sensitive data and why, in real time—may benefit from improved policy terms and lower premiums. Conversely, those lacking this visibility will likely face increased scrutiny, higher costs, or even denial of coverage.
This trend is already taking shape. Recent industry reports have shown a marked increase in insurers requiring specific security solutions, such as Privileged Access Management (PAM), as a prerequisite for coverage. This effectively makes the insurance industry a powerful enforcer of security best practices, pushing organizations to adopt a more proactive and evidence-based approach to protecting their digital assets.
AI Attacks: Separating Acceleration from Autonomy
While the prospect of autonomous AI agents turning against their creators makes for compelling science fiction, the Netwrix report provides a more grounded assessment for 2026. The most immediate AI-related threat is not fully autonomous cyberattacks, but rather the acceleration of existing attack techniques.
Operating a fully autonomous attack campaign in a complex enterprise environment remains prohibitively expensive and unpredictable. Factors like noisy signals, environmental variations, and the high cost of infrastructure make such ventures economically unfeasible for most adversaries in the near term. Instead, attackers will leverage AI to enhance and speed up their current methods, including reconnaissance, social engineering, impersonation, and the abuse of access privileges.
For defenders, this means the challenge is not yet fighting rogue AI, but rather building resilience against human adversaries who are now armed with AI-powered tools. The most effective safeguards remain strong foundational security controls: robust identity governance, comprehensive data visibility, and a security posture that denies attackers the permissive access and clean feedback loops that AI-driven automation depends on to succeed.
Looking Ahead: Self-Protecting Data and Vendor Risk
Looking toward 2027 and beyond, the forecast anticipates further convergence. Data itself is expected to become more intelligent, carrying its own encryption, access policies, and provenance—a record of its origin and who has interacted with it. While this concept of 'self-protecting data' holds promise for reducing breach impact, its effectiveness hinges on consistent implementation, and it will require strong identity context to remain manageable at scale.
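The idea of data carrying its own policy and provenance can be sketched as a simple record object that mediates every access itself and keeps an append-only log. This is an illustrative assumption, not a standard format: a real implementation would also encrypt the payload, which is omitted here for brevity.

```python
import hashlib

class SelfProtectingRecord:
    """Sketch of 'self-protecting data': a record bundling its payload
    with an access policy and an append-only provenance log. Class and
    field names are assumptions for exposition, not a standard."""

    def __init__(self, payload: str, allowed_identities: set):
        self.payload = payload
        self.policy = {"allowed": set(allowed_identities)}
        # Provenance starts with an origin entry and a content hash.
        self.provenance = [{
            "event": "created",
            "sha256": hashlib.sha256(payload.encode()).hexdigest(),
        }]

    def read(self, identity: str) -> str:
        """Access is mediated by the record itself and always logged --
        including denials, which gives the identity context auditors need."""
        if identity not in self.policy["allowed"]:
            self.provenance.append({"event": "denied", "who": identity})
            raise PermissionError(f"{identity} is not authorized")
        self.provenance.append({"event": "read", "who": identity})
        return self.payload

rec = SelfProtectingRecord("Q3 revenue figures", {"alice"})
print(rec.read("alice"))            # authorized read succeeds
try:
    rec.read("agent-42")            # unauthorized agent is refused and logged
except PermissionError as e:
    print(e)
print([e["event"] for e in rec.provenance])  # ['created', 'read', 'denied']
```

Because the policy and log travel with the data rather than living in a separate system, the record remains auditable wherever it is copied—which is exactly why the forecast stresses that this only scales with consistent implementation and reliable identity context.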
However, the report also warns of significant long-term risks that could undermine progress. If economic pressures lead to underinvestment in AI governance and oversight, organizations could be left with a tangled mess of unmanaged AI models and undocumented data dependencies, creating massive compliance and security gaps.
One of the most critical risks identified for 2028 and 2029 is the instability of the AI vendor ecosystem. As businesses rush to experiment with a growing number of emerging AI providers, they are entrusting their data—prompts, training sets, and outputs—to third-party platforms. This raises difficult questions about data ownership and control. If an AI vendor is acquired, pivots its business, or fails, enterprises may find it difficult or impossible to retrieve, govern, or even locate their data. This turns early AI experimentation into a persistent data exposure and business continuity risk, underscoring the urgent need for clear data ownership policies and robust governance frameworks from the very start of any AI initiative.
