The AI Arms Race: Firms Bet Billions on Autonomous Cyber Defenses

📊 Key Data
  • 96% of senior security leaders view AI-enabled cyberattacks as a significant threat, yet less than half are confident in their defenses.
  • 48% of organizations expect to dedicate at least a quarter of their cybersecurity budget to AI solutions within two years, up from just 9% today.
  • 97% of leaders believe their competitive advantage is tied to the maturity of their agentic AI defenses.
🎯 Expert Consensus

Experts agree that AI is becoming both the primary weapon and the essential defense in cybersecurity, requiring organizations to fundamentally rewrite their security strategies with AI at the core to stay ahead of evolving threats.

NEW YORK, NY – March 19, 2026 – A new digital battlefront has emerged, where artificial intelligence is both the weapon of choice for attackers and the last line of defense for corporations. A staggering 96% of senior security leaders now view AI-enabled cyberattacks as a significant threat, yet less than half are strongly confident in their ability to defend against them, according to a new EY Cybersecurity Roadmap Study.

This crisis of confidence is fueling an unprecedented investment surge into AI-powered security, signaling a fundamental shift in how enterprises are preparing to fight the wars of the future. The study of 500 senior corporate security leaders reveals a stark reality: organizations are no longer simply updating their playbooks; they are rewriting them entirely with AI at the core.

A New Investment Paradigm

The scale of the financial pivot is dramatic. According to the EY study, the number of organizations dedicating at least a quarter of their total cybersecurity budget to AI solutions is set to more than quintuple in just two years, jumping from just 9% today to 48%. This trend is not happening in a vacuum. Other industry analyses point to a sector-wide consensus: a recent PwC survey notes that AI is the top investment priority in cybersecurity budgets, and a KPMG report finds that 70% of organizations already dedicate over 10% of their cyber budgets to AI-related initiatives.

The spending spree is a direct response to a threat landscape that has been supercharged by artificial intelligence. Nearly half (48%) of leaders surveyed by EY estimate that at least a quarter of all cyber incidents their organization faced in the past year were enabled by AI. Attackers are leveraging AI to create highly convincing deepfake scams, automate vulnerability discovery, and personalize phishing campaigns at a scale and speed that overwhelm traditional human-led defenses.

"Security leaders have been rapidly bolting on AI solutions to stay ahead of AI-driven cyber threats, but their lack of confidence in defenses signals a need for reimagining security architecture with AI at the core," says Ganesh Devarajan, EY Americas Consulting Cyber Risk Practice Leader. "Cyber leaders can't just automate yesterday's defenses; they must move toward an AI-native posture that embeds cyber as a foundational layer of trust across enterprise AI."

Rise of the Autonomous Agents

At the heart of this new AI-native posture is the rise of 'agentic AI'—autonomous systems that can independently plan, orchestrate security tools, and execute complex, multi-step defensive actions with minimal human oversight. Unlike earlier AI models that merely assisted analysts, these agents are designed to reason and respond like an entire security team, operating at machine speed.
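To make the distinction concrete, the sketch below shows the kind of plan-act-observe loop an agentic defender runs: it decides a sequence of defensive steps, calls security tooling, and records the outcome rather than simply raising an alert for a human. This is a simplified illustration under stated assumptions, not a description of any vendor's product; all names such as ContainmentAgent, isolate_host, and open_ticket are hypothetical.

```python
# Minimal sketch of an agentic defense loop (hypothetical names throughout).
# Unlike an assistant that only flags alerts for an analyst, the agent plans a
# multi-step response, invokes tooling, and stops at a hard step limit.

from dataclasses import dataclass, field


@dataclass
class Alert:
    host: str
    indicator: str   # e.g. a hash or domain observed on the host
    severity: str    # "low" | "medium" | "high"


@dataclass
class ContainmentAgent:
    max_steps: int = 5
    actions_taken: list = field(default_factory=list)

    def plan(self, alert: Alert) -> list:
        """Decide an ordered sequence of defensive actions for this alert."""
        steps = [("enrich_indicator", alert.indicator)]
        if alert.severity == "high":
            steps += [("isolate_host", alert.host), ("open_ticket", alert.host)]
        else:
            steps += [("open_ticket", alert.host)]
        return steps

    def act(self, step: tuple) -> str:
        """Call out to security tooling; here each tool is a stub."""
        tool, target = step
        self.actions_taken.append(step)
        return f"{tool}({target}) -> ok"

    def respond(self, alert: Alert) -> list:
        """Plan, execute, and record results, capped at max_steps."""
        return [self.act(step) for step in self.plan(alert)[: self.max_steps]]


if __name__ == "__main__":
    agent = ContainmentAgent()
    print(agent.respond(Alert(host="srv-42", indicator="bad.example.com", severity="high")))
```

In a production system each stubbed tool call would be an integration with real EDR, ticketing, or network controls, and the step limit and human-override points are exactly where the governance questions discussed below come into play.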

According to the EY data, 97% of leaders believe their competitive advantage in the near future is directly tied to the maturity of their agentic AI defenses. The adoption rates are projected to soar. The number of security leaders who expect agentic AI to largely run critical functions is set to roughly double in two years for areas like advanced persistent threat (APT) detection (from 30% to 62%) and real-time fraud detection (from 32% to 58%).

"Budget increases create the opportunity for cyber leaders to strategically invest to move from automating simple tasks to advanced agentic AI systems that can undertake complex, multi-step actions across products and ecosystems simulating human responses to attacks," Devarajan notes.

This move toward autonomy is a necessary evolution. Recent real-world incidents, such as a sophisticated $25 million fraud executed using AI-generated deepfake video to impersonate a CFO, demonstrate that attacks can now unfold faster than human teams can react. Similarly, AI-powered scripts have been used to systematically probe and exploit corporate APIs in major data breaches, adapting in real-time to evade detection.

The Governance Gap: A Bridge to Trust or Path to Peril?

While organizations are racing to deploy these powerful AI defenders, a critical vulnerability has emerged not in technology, but in policy. The EY study exposes a profound 'governance gap': although virtually all leaders (98%) agree that an AI cybersecurity governance framework is essential for responsible AI use, a mere 20% of organizations have successfully optimized and embedded these frameworks into their corporate culture.

The majority are lagging, with 51% reporting their framework is only embedded in key processes and 26% stating it is rolled out but not yet fully integrated across all business units. This gap between policy and practice is fraught with peril. The very autonomy that makes agentic AI a powerful defender also introduces novel risks, including the potential for cascading compromises, tool misuse, and data exfiltration if not properly governed.

Frameworks like the NIST AI Risk Management Framework (AI RMF) provide a roadmap for organizations to manage these risks systematically. The AI RMF encourages a holistic approach through its core functions—Govern, Map, Measure, and Manage—pushing organizations to build trustworthy AI that is transparent, accountable, and secure throughout its lifecycle. Without such structured oversight, the deployment of powerful AI agents could lead to unintended consequences, eroding the very trust they are designed to build.
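One way organizations operationalize this is to track concrete controls against each of the four core functions. The brief sketch below is illustrative only: the function names Govern, Map, Measure, and Manage come from the AI RMF itself, but the example controls and the simple coverage score are assumptions, not the framework's prescribed requirements.

```python
# Illustrative sketch: tracking coverage of the four NIST AI RMF core functions.
# The function names come from the framework; the example controls and the
# coverage score are assumptions for illustration only.

AI_RMF_CHECKLIST = {
    "Govern":  ["AI security policy approved", "roles and accountability assigned"],
    "Map":     ["AI systems and agents inventoried", "threat scenarios documented"],
    "Measure": ["agent actions logged and auditable", "red-team tests scheduled"],
    "Manage":  ["human-override procedure defined", "incident playbook updated"],
}


def coverage(completed: set) -> dict:
    """Return, per core function, the fraction of tracked example controls completed."""
    return {
        function: sum(item in completed for item in items) / len(items)
        for function, items in AI_RMF_CHECKLIST.items()
    }


if __name__ == "__main__":
    done = {"AI security policy approved", "AI systems and agents inventoried"}
    for function, score in coverage(done).items():
        print(f"{function}: {score:.0%} of tracked controls in place")
```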

"The proliferation of AI cyber threats is an operational reality that puts the limitations of legacy frameworks on full display," says Devarajan. "Organizations must move beyond standalone cyber defenses and risk management toward a system of architecting trust across governance, compliance and ethics that turns AI from a risk into a competitive advantage."
