The Invisible AI Jungle: Enterprises Blind to Rampant Security Risks
- 44% visibility: Security teams only have visibility into 44% of AI applications built by business users.
- 4:1 ratio: Business builders outnumber professional software developers by 4:1 in many organizations.
- 90% of CISOs: 90% of CISOs plan to implement formal governance policies for citizen development by the end of 2026.
Experts warn that the unmonitored proliferation of AI tools by non-technical employees creates a critical governance gap, exposing enterprises to significant security risks, including data breaches and unauthorized access.
The Invisible AI Jungle: Enterprises Blind to Rampant Security Risks
BOULDER, Colo. – April 27, 2026 – A silent revolution is unfolding within the world's largest companies, but their security teams are largely unaware. As businesses race to embrace artificial intelligence, a new class of “citizen developers” from marketing, finance, and operations are building their own AI-powered applications and automations. While this fuels unprecedented productivity, it has also created a sprawling and dangerously unmonitored digital wilderness that experts are calling the “Enterprise AI Jungle.”
A stark new survey released today by cybersecurity firm Nokod reveals the sheer scale of this blind spot. The 2026 State of Security in Business-Built Applications and AI Agents Survey, which polled 200 enterprise Chief Information Security Officers (CISOs), found that security teams have visibility into only 44% of the AI applications, agents, and automations built by business users. This means the majority of these tools, many of which handle sensitive company and customer data, are operating completely in the shadows, outside the reach of traditional security controls.
This phenomenon, dubbed “shadow engineering,” is driven by the accessibility of powerful enterprise AI platforms like Microsoft Copilot Studio, ServiceNow, and UiPath. These low-code and no-code tools empower non-technical employees to create complex workflows and autonomous AI agents. The survey found that in many organizations, these business builders now outnumber professional software developers by a ratio of 4 to 1, with some companies reporting a ratio as high as 10 to 1. The result is a critical governance gap, with 80% of security teams admitting they lack full visibility into this new, rapidly expanding attack surface.
“Security teams are losing a race they don't even realize they are running,” said Yair Finzi, CEO and Co-Founder of Nokod, in the press release. “Entire layers of enterprise logic are emerging outside traditional oversight, creating a jungle of untracked risks. Our survey highlights that these enterprise AI tools are now supporting the most critical workflows in the company, often with zero governance.”
A Threat Substantiated by Breaches
While Nokod’s report puts a spotlight on the issue, the threat of unmanaged, business-built tools is not theoretical. It's a problem that industry analysts have been warning about for years and one that has already led to significant, real-world data breaches. Long before the current generative AI boom, a 2021 misconfiguration in Microsoft’s low-code Power Apps platform exposed over 38 million sensitive records from organizations like American Airlines, Ford, and multiple government agencies. The incident, caused by default settings that left data publicly accessible, was an early warning of how easily citizen-developed applications could lead to massive data exposure.
More recently, the risks have compounded with the integration of AI. In 2023, employees at Samsung inadvertently leaked confidential source code and internal meeting notes by using ChatGPT for assistance, feeding proprietary data directly into a public model. The problem extends beyond user error to vulnerabilities within the platforms themselves. In late 2025, a critical vulnerability in ServiceNow’s AI platform, codenamed “BodySnatcher,” allowed attackers to impersonate users and take over accounts with just an email address.
Industry analysts have been sounding the alarm. Gartner predicts that by 2030, over 40% of enterprises will suffer security incidents directly linked to unauthorized “shadow AI.” Forrester has been even more direct, warning of a “shadow AI pandemic” and noting that the risks often surpass those of traditional shadow IT because AI tools don’t just store data—they process, learn from, and transform it, making data leakage potentially permanent and untraceable.
Why Traditional Security Is Falling Short
The fundamental challenge is that the tools and processes designed to secure professionally developed software are ill-equipped to manage the AI jungle. Traditional Application Security (AppSec) programs rely on code scanning and structured development lifecycles, which do not apply to the dynamic, model-driven, and often codeless nature of business-built AI.
These new applications introduce novel attack vectors that legacy systems cannot detect. One major risk is “prompt injection,” where a malicious actor can manipulate an AI agent’s instructions through seemingly benign inputs, tricking it into leaking sensitive data or performing unauthorized actions. Another is the issue of overprivileged agents; an AI assistant built by a user with broad data access can inadvertently become a conduit for that data to be exposed to anyone who interacts with the agent.
“Without visibility into such usage, security teams face the difficult task of protecting assets they can't see or control,” commented one independent security advisor at a major cloud provider. The problem is that once sensitive data is fed into an external AI model, it may be used for retraining and become part of the model itself, effectively putting corporate intellectual property outside the organization's control forever.
This was demonstrated in a 2026 breach at the cloud platform Vercel, which was compromised via a third-party AI productivity tool an employee had connected to their corporate account. The incident, which security experts called a “textbook example of shadow AI,” allowed attackers to pivot from the unvetted AI tool into Vercel’s core systems.
The Path Forward: Governance Over Prohibition
Faced with this growing threat, the overwhelming consensus among security leaders is that attempting to ban these AI tools is both futile and counterproductive. Employees, driven by the need for efficiency, will simply find ways to use them under the radar, pushing the problem deeper into the shadows and making it impossible to manage.
The alternative is to embrace the trend while wrapping it in a new layer of intelligent governance. Nokod's survey suggests that leadership is waking up to this reality, with 90% of CISOs stating they expect to implement formal governance policies for citizen development by the end of 2026. Furthermore, 67% of organizations report they already allocate a budget for securing these tools, a figure expected to grow by 15% in the coming year.
This shift is fueling an emerging market for AI governance and security platforms designed specifically for this new ecosystem. These solutions aim to provide the visibility that security teams desperately lack, automatically discovering all AI applications and automations across the enterprise. By mapping how these tools interact with data, they can detect risks like data leakage, misconfigurations, and vulnerabilities in real time.
Rather than blocking tools outright, these platforms establish automated guardrails that allow employees to innovate safely. They can, for instance, automatically remediate a risky connection or alert a security team when an AI agent attempts to access a sensitive database. By providing this “map and guide” to the AI jungle, organizations can begin to turn a hidden risk into a governed, secure engine for innovation, ensuring that the drive for productivity does not come at the cost of security.
📝 This article is still being updated
Are you a relevant expert who could contribute your opinion or insights to this article? We'd love to hear from you. We will give you full credit for your contribution.
Contribute Your Expertise →