The AI Blind Spot: Enterprises Adopt Agents They Can't See or Control
A new report reveals a shocking gap: as autonomous AI agents flood enterprises, only 21% have visibility, creating a massive, unaddressed security risk.
The AI Blind Spot: Enterprises Adopt Agents They Can't See or Control
SAN FRANCISCO, CA – December 08, 2025 – A silent revolution is underway inside the world’s largest companies, but it’s happening in the dark. Autonomous AI systems, known as “agents,” are being rapidly integrated into mission-critical business functions, yet a new report reveals that the vast majority of organizations are flying blind. According to the inaugural State of Agentic AI Security 2025 report from cybersecurity firm Akto, a staggering 79% of enterprises lack full visibility into what these powerful new agents are doing, what data they are accessing, or what tools they are using.
The findings paint a stark picture of a widening chasm between innovation and oversight. “AI agents didn't enter the enterprise quietly; they arrived at full force in 2025,” said Ankita Gupta, CEO and Co-Founder of Akto, in the report's announcement. “This report shows a clear gap between the adoption of AI and security readiness. That mismatch is now the biggest enterprise risk of 2026.”
The New Reality of Autonomous AI
Unlike the reactive AI models of the past, such as chatbots that respond to specific queries, agentic AI represents a paradigm shift. These are goal-driven, autonomous systems capable of reasoning, planning, and executing multi-step tasks to achieve a high-level objective. They are not just tools; they are digital workers being deployed to manage entire sales lifecycles, streamline complex project management workflows, and even conduct autonomous security investigations.
The adoption rate is breathtaking. Akto’s research, which surveyed hundreds of security leaders from major corporations, found that agentic AI has moved far beyond the experimental phase. Nearly 40% (38.6%) of organizations have already deployed agents at a departmental or enterprise-wide scale. Another 23.8% are running active pilots, and 31.7% are in deep experimentation. In short, the technology is already deeply embedded.
This rapid, often decentralized adoption has created a phenomenon some experts call “Shadow AI.” Much like the “Shadow IT” of the past, where departments would procure their own software without corporate approval, development teams are now spinning up AI agents to boost agility and efficiency. While the productivity gains are compelling, this bottom-up approach means security and platform teams often have no central inventory or understanding of the autonomous systems operating within their own environments.
A Widening Chasm Between Speed and Safety
The core of the problem lies in a fundamental disconnect. Development and business teams are embracing agentic AI for its speed and power, while security teams, equipped with traditional tools, are struggling to keep pace. “The biggest concern for AppSec is the speed,” noted Bala Thripura Akasam, Application Security Manager at Tapestry, in the report. “Agentic AI is being adopted far faster than security teams can assess or secure the risks.”
This isn't for lack of awareness. The report indicates that 65% of organizations believe that implementing action-level guardrails and runtime controls is a critical priority. However, intention is not translating to action, as only half of those organizations have actually managed to implement them. Most are still relying on outdated methods like manual code reviews or after-the-fact log analysis—processes wholly unsuited for governing systems designed to act autonomously and unpredictably.
This creates a dangerous blind spot. “Visibility is the biggest gap today,” stated Suhel Khan, CISO at Chargebee. “You can't govern or enforce guardrails if you don't know what your agents are doing. Without observability, every control is guesswork.”
Understanding the Invisible Threat
Operating intelligent, autonomous systems without foundational observability is akin to allowing an employee to work with sensitive company data without any supervision or access controls. The risks are not merely theoretical; they represent entirely new attack surfaces that conventional security measures were not designed to handle.
These threats include:
- Prompt Injection: An attacker can craft a malicious prompt that tricks an agent into bypassing its safety protocols, potentially leading it to delete data, grant unauthorized access, or execute harmful commands.
- Tool Poisoning: AI agents rely on external tools and data sources to perform tasks. If an attacker compromises one of these tools, they can “poison” the agent with bad data or malicious code, causing cascading failures or data breaches.
- Sensitive Data Leaks: Without strict controls, an agent could be manipulated into accessing and exfiltrating proprietary intellectual property, customer PII, or financial records, leading to severe compliance violations under regulations like GDPR and HIPAA.
- Cascading Hallucinations: An error or fabrication from one agent can be accepted as fact by another, leading to a chain reaction of bad decisions and corrupted data that spreads silently across the enterprise.
Securing these systems requires a new approach focused on the underlying infrastructure, particularly the Model Context Protocol (MCP) servers that act as the gateway between AI agents and their tools. MCP security functions like a specialized firewall, allowing organizations to enforce policies, block malicious inputs, and create a complete audit trail of every action an agent takes.
Charting a Path to Governance
The Akto report is more than a warning; it’s a roadmap for what comes next. As enterprises move into 2026, the consensus among surveyed leaders is that ad-hoc AI security will no longer be tenable. The coming year is expected to bring a wave of formalization as organizations race to close the visibility gap.
The key initiatives on the horizon include establishing formal ownership of AI security across Application Security and Platform Engineering teams, creating standardized permission boundaries for all agent workflows, and mandating action-level logging for every single agent invocation. Furthermore, continuous, automated “red teaming”—where AI is used to test the security of other AIs—is expected to become a baseline requirement.
Enterprises are moving toward a future where AI agents are classified by risk level, scope, and data access rights, much like employees are today. The overwhelming expectation is that by the end of 2026, Agentic AI Security will be considered as fundamental to enterprise operations as Cloud Security and Identity and Access Management (IAM) are now. The era of agentic AI is here, and the findings from Akto's report make it clear that building a secure foundation is not just a technical requirement, but a fundamental business imperative for 2026 and beyond.
📝 This article is still being updated
Are you a relevant expert who could contribute your opinion or insights to this article? We'd love to hear from you. We will give you full credit for your contribution.
Contribute Your Expertise →