SlashID Tackles Enterprise AI Identity Crisis with New Platform

📊 Key Data
  • 1 in 4 employees use unapproved AI technology at work, creating security blind spots.
  • Over 150,000 non-human AI identities could exist per large enterprise by 2028 (Gartner prediction).
  • April 2026 Vercel incident highlighted risks of unmanaged AI tool access via OAuth 2.0.
🎯 Expert Consensus

Experts agree that managing non-human AI identities is critical for enterprise security, requiring identity-centric governance to mitigate risks like data breaches and compliance violations.


NEW YORK, NY – May 05, 2026 – As enterprises race to adopt artificial intelligence, a new and largely invisible security threat is emerging from within: the proliferation of non-human AI identities with unmanaged access to sensitive corporate data. Addressing this growing challenge, identity security firm SlashID today announced the launch of its AI Identity Governance platform, a solution designed to extend traditional security controls to the world of AI applications, autonomous agents, and cloud-based models.

The new offering aims to solve the problem of “Shadow AI”—the unsanctioned use of AI tools by employees—by giving companies visibility and control over the myriad AI-driven identities now operating within their digital ecosystems. The timing is critical: security leaders are grappling with tools that can be integrated into workflows with a single click, often inheriting broad permissions to corporate resources.

The Governance Gap: Shadow AI and the Vercel Incident

The urgency of this problem was starkly illustrated by the April 2026 Vercel security incident, which SlashID cites as a key driver for its new platform. In that breach, attackers compromised an employee’s Google Workspace account via a malicious OAuth 2.0 application connected to a third-party AI tool. This type of attack highlights a critical flaw in existing security postures: traditional governance platforms, built for predictable software lifecycles, are ill-equipped to handle the speed and scale of AI adoption.
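To make the Vercel-style risk concrete, the sketch below (a hypothetical illustration, not code from any vendor) flags third-party OAuth 2.0 grants whose scopes would let a compromised app read corporate mail or files. The scope URLs are real Google API scopes; the grant records and the `approved` flag are assumptions for the example.

```python
# Hypothetical sketch: flag OAuth 2.0 grants to unapproved third-party
# apps that carry scopes broad enough to read mail or files.
RISKY_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly",
    "https://www.googleapis.com/auth/drive",
}

def risky_grants(grants):
    """Return grants to unapproved apps that include a risky scope."""
    return [
        g for g in grants
        if not g.get("approved", False)
        and RISKY_SCOPES & set(g.get("scopes", []))
    ]

grants = [
    {"app": "ai-notetaker", "approved": False,
     "scopes": ["https://www.googleapis.com/auth/gmail.readonly"]},
    {"app": "calendar-sync", "approved": True,
     "scopes": ["https://www.googleapis.com/auth/calendar"]},
]
flagged = risky_grants(grants)
```

In this toy inventory only the unsanctioned note-taking app is flagged, even though both apps hold active grants, because risk here is a function of both approval status and scope breadth.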

Research indicates the problem is widespread. A 2025 report found that roughly one in four employees use unapproved AI technology at work, creating a massive blind spot for IT and security teams. Each time an employee authorizes an AI assistant or connects a new tool, they effectively create a new non-human identity. These identities, which Gartner predicts could number over 150,000 per large enterprise by 2028, often operate with broad, poorly understood permissions, creating a sprawling and unmanaged attack surface.

“AI governance is fundamentally about identity and entitlements,” said Vincenzo Iozzo, SlashID’s Co-Founder, in the company's announcement. “Every time an employee authorizes a new AI assistant... they are effectively creating a new non-human identity with access to corporate resources. Security teams need the same visibility, policy enforcement, and lifecycle controls for those identities that they already have for users and service accounts.”

An Identity-Centric Approach with the Access Graph

SlashID’s answer is to place identity at the core of AI security. The new platform is built on the company's existing Access Graph, a technology that maps the complex relationships between all identities (human and non-human) and the resources they can access. By extending this graph to AI, the platform provides three core capabilities.

First is Unified Visibility. The solution continuously discovers OAuth 2.0 grants given to AI applications, usage of shadow AI tools via a browser extension, and connections to AI models on platforms like Amazon Bedrock and Azure OpenAI. Crucially, the Access Graph models OAuth scopes—the specific permissions granted—as first-class relationships. This allows security teams to see not just that a user connected an AI app, but precisely which mailboxes, files, and repositories that app can now reach.
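The idea of modeling scopes as first-class graph edges can be sketched as follows. This is an illustrative toy, not SlashID's actual data model: the scope names, users, and apps are all invented for the example.

```python
# Illustrative access graph: OAuth scopes are first-class edges, so a
# query can answer "which resources can this AI app now reach, and
# through whose grants?"

# What each (hypothetical) scope unlocks for the user who granted it.
SCOPE_UNLOCKS = {
    "mail.read": lambda user: {f"mailbox:{user}"},
    "files.read": lambda user: {f"drive:{user}"},
}

# Grant edges: (user identity, AI app, scopes the user consented to).
GRANTS = [
    ("alice", "ai-assistant", {"mail.read", "files.read"}),
    ("bob", "ai-assistant", {"mail.read"}),
]

def reachable_resources(app):
    """Union of resources the app can reach via any user's grants."""
    reached = set()
    for user, granted_app, scopes in GRANTS:
        if granted_app == app:
            for scope in scopes:
                reached |= SCOPE_UNLOCKS[scope](user)
    return reached
```

Traversing the graph for `ai-assistant` shows it can now reach two mailboxes and one drive, which is precisely the "not just that a user connected an app, but what it can reach" question the article describes.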

Second, it enables Policy-Based Access Control. Administrators can create rules to restrict or block access to specific AI applications or model providers based on user roles or other attributes. For example, a policy could prevent employees in the finance department from authorizing consumer-grade AI tools to access sensitive financial data, with enforcement automated throughout the employee lifecycle.
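A minimal evaluation loop for the finance-department example might look like this. The policy shape, department names, and app categories are assumptions made for illustration; real policy engines are attribute-based and far richer.

```python
# Hypothetical attribute-based policy: finance users may not authorize
# consumer-grade AI tools. All names and categories are illustrative.
POLICIES = [
    {"department": "finance", "blocked_categories": {"consumer_ai"}},
]

def authorization_allowed(user, app):
    """Evaluate every policy; deny on the first matching rule."""
    for policy in POLICIES:
        if (user["department"] == policy["department"]
                and app["category"] in policy["blocked_categories"]):
            return False
    return True

finance_user = {"name": "dana", "department": "finance"}
engineer = {"name": "eli", "department": "engineering"}
chatbot = {"name": "chatbot-x", "category": "consumer_ai"}
```

Running the check at authorization time, rather than after the fact, is what lets enforcement stay automated across the employee lifecycle: when a user's department attribute changes, the same rule immediately applies.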

Finally, the platform offers Continuous Segregation-of-Duties (SoD) Enforcement. Security teams can define and monitor for “toxic combinations” of access, such as an identity having access to regulated customer data while also holding active permissions to an external Large Language Model (LLM). These checks can automatically trigger remediation workflows, like revoking access or creating a security ticket, without manual intervention.
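A toxic-combination check of this kind reduces to scanning each identity's entitlements for a forbidden pair. The sketch below assumes a flat tag model and a placeholder remediation step; both are invented for the example.

```python
# Sketch of a continuous SoD check: an identity holding both regulated
# customer-data access and an active external-LLM grant is "toxic".
def toxic_identities(entitlements):
    """entitlements: {identity_name: set of entitlement tags}"""
    return sorted(
        ident for ident, tags in entitlements.items()
        if "regulated_customer_data" in tags and "external_llm" in tags
    )

def remediate(identity):
    # Placeholder: a real workflow would revoke the grant or open a
    # security ticket automatically.
    return f"ticket opened for {identity}"

ents = {
    "svc-reporting": {"regulated_customer_data"},
    "agent-summarizer": {"regulated_customer_data", "external_llm"},
}
actions = [remediate(i) for i in toxic_identities(ents)]
```

Only the agent holding both entitlements is flagged; either permission alone is acceptable, which is what distinguishes an SoD rule from a simple blocklist.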

Navigating a Crowded AI Security Market

SlashID is entering a competitive and rapidly evolving market. Many organizations are already investing in a patchwork of point solutions for AI security, including Data Loss Prevention (DLP) proxies to monitor data flows, prompt firewalls to prevent malicious inputs, and Cloud Access Security Broker (CASB) tools to discover shadow AI usage. However, SlashID argues that these tools often operate in isolation, generating alerts without the identity context needed to prioritize and remediate risks effectively.

The company’s identity-centric, graph-based philosophy is not entirely unique. Competitors like Veza have also been championing an access graph approach for managing machine and AI identities, and other startups like Astrix Security and Relyance AI are tackling agentic AI security with an identity-first mindset. The key differentiator SlashID emphasizes is its native integration into a broader identity governance platform that manages human and non-human access with the same set of controls, without requiring inline proxies or additional agents on user devices.

By operating at the identity fabric layer, the solution aims to provide a more foundational and less intrusive form of governance than tools that must inspect traffic or content. This integrated approach promises to give security teams a single, unified view to answer the fundamental question: who and what has access to our data, and how did they get it?

Enabling Innovation Through Secure Adoption

Ultimately, the goal of such platforms is not to block AI, but to enable its safe and rapid adoption. By providing robust governance and guardrails, companies can empower their employees to leverage powerful new AI tools without exposing the organization to unacceptable risks of data breaches or compliance violations under regulations like SOC 2, ISO 27001, and HIPAA.

The solution is available immediately to existing SlashID customers as part of its Identity Governance and Administration product, signaling a strategic move to position identity security as the central pillar of an effective AI security strategy. As AI becomes woven into every facet of business, the ability to see, manage, and secure these new non-human identities will become less of a niche capability and more of a fundamental requirement for enterprise security.

