AI's Double-Edged Sword: The $10 Trillion Cybersecurity Question
As AI drives innovation, it also fuels a projected $10 trillion in cyber losses. Experts warn reactive security is failing and a proactive shift is vital.
NEW YORK, NY – December 18, 2025 – The rapid integration of Artificial Intelligence into the business world promises unprecedented efficiency and innovation. Yet, this technological leap forward casts a long and costly shadow. As businesses race to adopt AI, they are simultaneously creating new, complex security vulnerabilities that threaten to dwarf previous cyber threats, contributing to a global cybercrime cost projected by researchers at Cybersecurity Ventures to hit a staggering $10.5 trillion annually by 2025.
This paradox—where a tool for progress becomes a vector for attack—was the focus of a recent discussion between Tenable Co-CEO Steve Vintz and Creighton University’s Dr. Dustin Ormond. The experts, speaking with Today's Marketplace, warned that the current cybersecurity paradigm is ill-equipped for the AI era, and without a fundamental shift in strategy, businesses risk catastrophic losses.
The Failure of Reactive Security
For years, the dominant approach to cybersecurity has been reactive. Organizations have built digital fortresses and alarm systems, investing heavily in tools designed to detect and respond to breaches after they occur. According to Steve Vintz, this strategy is no longer tenable. “Over the years, a lot of the investment in security has been on reactive detect and respond solutions,” he stated. “That means companies are actively looking for breaches and responding to them, and 96% of all dollars are spent on detect and respond.”
This reactive posture, Vintz explained, has led directly to the trillions of dollars in cyber losses seen this year. The problem is compounded by the sheer complexity of modern digital estates. “We have a sprawling ecosystem of traditional IT (servers, desktops, and laptops), as well as public and private cloud environments, and IoT, not to mention identities that both humans and machines have access to,” he noted. In this fragmented landscape, waiting for an alarm to sound is a losing game.
Attackers, now armed with AI, can move faster and more intelligently than ever before. Traditional security teams are left in a constant state of firefighting, overwhelmed by the volume of alerts and struggling to distinguish real threats from background noise. This approach fails to address the root causes of vulnerability, leaving critical systems exposed until it is too late.
The Imperative of Proactive Exposure Management
The antidote to this reactive cycle, experts argue, is a proactive strategy known as exposure management. Vintz advocates for this approach, describing it as a move “from reactive firefighting to proactive fireproofing.” Instead of waiting for an attack, exposure management focuses on continuously discovering, assessing, and prioritizing vulnerabilities across the entire attack surface before they can be exploited.
This modern strategy is embodied by platforms like Tenable One, which has been recognized as a leader by industry analyst firms like Forrester and Gartner. Such systems provide a unified view of all assets and their potential exposures—whether in the cloud, on-premises, or on an employee's laptop. By leveraging AI, these platforms can analyze countless vulnerabilities and prioritize the handful that pose a genuine, immediate threat to critical business operations. This allows security teams to focus their limited resources where they will have the most impact.
Key to this approach is attack path analysis, which visualizes how an attacker could chain together seemingly low-risk vulnerabilities to reach a high-value asset, like a customer database. Furthermore, these platforms are now being designed to tackle the rise of “shadow AI”—the unsanctioned use of AI tools by employees—by discovering these applications and assessing the risks they introduce. By understanding their full exposure, organizations can systematically reduce their risk profile rather than just responding to incidents.
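To make the idea of attack path analysis concrete, the chaining of low-risk exposures can be modeled as a search over a graph of assets. The sketch below is purely illustrative: the asset names, the edges (each standing in for a hypothetical vulnerability or over-permissive credential), and the breadth-first search are simplified assumptions, not how any particular platform implements the technique.

```python
from collections import deque

# Hypothetical asset graph: each edge represents an exploitable
# relationship (e.g. an unpatched flaw or leaked credential) that
# lets an attacker pivot from one asset to the next.
attack_graph = {
    "internet": ["web_server"],
    "web_server": ["app_server"],        # unpatched RCE (illustrative)
    "app_server": ["service_account"],   # credentials in a config file
    "service_account": ["customer_db"],  # over-broad DB permissions
    "customer_db": [],
}

def find_attack_path(graph, start, target):
    """Breadth-first search for the shortest chain of pivots
    from an entry point to a high-value asset."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the target is not reachable

path = find_attack_path(attack_graph, "internet", "customer_db")
print(" -> ".join(path))
# internet -> web_server -> app_server -> service_account -> customer_db
```

Even in this toy model, no single hop looks critical on its own; it is the complete path to the customer database that reveals which exposure to fix first—severing any one edge breaks the chain.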
AI's Weakest Link: The Human Element
While advanced technology is crucial, both Vintz and Dr. Ormond concluded that the most persistent vulnerability remains the human element. AI doesn't just create new technical exploits; it supercharges the methods used to manipulate people. “A lot of the vulnerabilities happen because AI agents can now do a lot of the things that people used to need sophisticated skills to do,” said Dr. Ormond, an Associate Professor of Business Intelligence Analytics at Creighton University whose research focuses on behavioral cybersecurity.
AI drastically lowers the barrier to entry for creating sophisticated attacks. Threat actors can now generate hyper-realistic phishing emails, deepfake videos, and cloned voices at scale, making it nearly impossible for the average employee to distinguish between a legitimate request from a CEO and a fraudulent one from an AI-powered scam. These attacks bypass technical defenses by targeting human psychology, curiosity, and trust.
The danger is not just external. Employees themselves can inadvertently create massive security holes. “When people take a company's information and plug it into these AI agents, they don't realize how much information they're giving away,” Dr. Ormond warned. This casual use of public large language models can feed sensitive intellectual property, strategic plans, and private customer data directly into third-party systems with weak data protection, creating a permanent, unpatchable exposure.
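One way organizations mitigate this kind of leakage is to screen text before it leaves the company. The following is a minimal sketch of that idea, assuming a few hypothetical regex patterns and a made-up draft prompt; real data-loss-prevention tooling is far more sophisticated than this.

```python
import re

# Illustrative patterns only: names and expressions are assumptions
# for this sketch, not a production ruleset.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api-key-like token": re.compile(r"\b[A-Za-z0-9]{32,}\b"),
    "ssn-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(text):
    """Return the names of sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Hypothetical prompt an employee might paste into a public AI tool.
draft = "Summarize this: contact jane.doe@example.com, SSN 123-45-6789."
findings = flag_sensitive(draft)
if findings:
    print("Blocked before submission: contains", ", ".join(findings))
```

A filter like this is only a backstop: it catches obvious identifiers, not strategic plans or unreleased figures, which is why the policy and training measures discussed below remain essential.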
Forging a Resilient, AI-Aware Culture
Addressing the AI threat requires a two-pronged approach that combines proactive technological defense with robust human education. As AI-driven threats evolve, so too must an organization's security culture. Now more than ever, companies must invest in creating an informed environment where employees are the first line of defense, not the weakest link.
This means moving beyond annual compliance training. Effective education must be continuous, practical, and tailored to the specific threats posed by AI. Employees need to be trained to recognize the tell-tale signs of AI-generated phishing, to be skeptical of urgent digital requests, and to understand the concrete risks of inputting company data into unauthorized AI tools. A clear, well-communicated policy on the acceptable use of AI is no longer optional.
To guide this effort, organizations can turn to emerging industry standards like the NIST AI Risk Management Framework, which provides a structure for governing, mapping, and managing AI-related risks, and the OWASP Top 10 for Large Language Model Applications, which details critical vulnerabilities like prompt injection and sensitive information disclosure. By integrating these frameworks with proactive exposure management and a deep investment in their people, businesses can begin to harness the power of AI without succumbing to its perils. In this new era, a vigilant and well-trained workforce is the ultimate cybersecurity asset.