Audit Leaders Sound Alarm on AI Fraud as Corporate Defenses Lag
- 85% of senior internal audit leaders view AI-driven fraud as a moderate to high risk, but fewer than 40% believe their departments are adequately equipped to detect or respond to it.
- 88% of audit leaders identify AI-powered phishing attacks as a top risk, with a 703% rise in credential phishing attacks in late 2024.
- $25 million lost in a 2024 deepfake impersonation scam involving AI-generated video calls.
Experts warn that although AI-driven fraud is widely recognized as a major risk, most organizations remain dangerously unprepared and need urgent investment in technology, skills training, and cross-departmental collaboration to counter evolving threats.
LAKE MARY, Fla. – February 17, 2026 – A stark warning has been issued to corporate leaders across North America: while the threat of artificial intelligence-enabled fraud is widely recognized as a major and growing risk, the teams responsible for guarding against it are dangerously unprepared. A new report from The Internal Audit Foundation and AuditBoard reveals that while a staggering 85% of senior internal audit leaders view AI-driven fraud as a moderate to high risk, fewer than four in ten believe their departments are adequately equipped to detect or respond to it.
The joint report, based on a survey of over 370 audit executives, exposes a critical 'readiness gap' that leaves organizations vulnerable to increasingly sophisticated and scalable attacks. The findings underscore an urgent need for businesses to overhaul their defenses, prioritizing investment in technology, advanced skills training, and cross-departmental collaboration to keep pace with a rapidly evolving threat landscape where traditional safeguards are becoming obsolete.
"AI is reshaping how organizations operate, driving greater efficiency, automation, and insight," said Anthony Pugliese, President and CEO of The Institute of Internal Auditors (The IIA). "At the same time, those capabilities are increasingly being leveraged to enable more sophisticated and scalable fraud. As adoption accelerates, internal audit has a critical role to play in helping organizations understand these risks, identify emerging threats, and respond effectively."
The New Face of Fraud: From Phishing to Deepfake Heists
The nature of corporate fraud is transforming at an alarming rate, moving beyond simple scams to highly convincing, AI-powered campaigns. The survey found that AI-powered phishing attacks are the most-cited concern, with 88% of audit leaders identifying them as a top risk. These are not the clumsy, typo-ridden emails of the past; generative AI now allows criminals to craft perfectly worded, context-aware messages at scale, leading to a reported 703% rise in credential phishing attacks in late 2024.
Beyond phishing, leaders are deeply concerned about AI-generated fabricated invoices (65%), automated social engineering (58%), and the chilling rise of deepfake audio or video impersonation (45%). The potential damage from these deepfake attacks is no longer theoretical. In a well-publicized 2024 incident, a finance worker at a multinational firm in Hong Kong was duped into transferring over $25 million after attending a video call with what he believed were senior executives, who were in fact AI-generated deepfakes.
Other emerging threats keeping auditors awake at night include the use of AI to insert malicious code (41%), forge legal contracts (29%), and create synthetic identities for fraudulent job applications (28%). These tools, once the domain of nation-states, are now available through a booming 'AI-Fraud-as-a-Service' underground market, making sophisticated attacks accessible to a wider range of criminals.
An Uphill Battle Against Skills and Technology Gaps
The report makes it clear that awareness of the threat is not translating into preparedness. The primary barriers holding audit functions back are a lack of appropriate technology and tools (57%) and an insufficient number of staff with relevant skills or expertise (55%). Competing organizational priorities and limited budgets further compound the challenge, leaving internal audit teams fighting a 21st-century war with 20th-century weapons.
"While the awareness of AI-enabled fraud is high, the 'readiness gap' remains a significant vulnerability for most organizations," warned Richard Chambers, Senior Advisor at AuditBoard. "Internal audit leaders must take disciplined action by equipping their teams with the right technology, continuous training, and access to cross-functional data. In a world of automated, AI-powered threats, manual fraud detection is no longer a viable defense."
Despite these hurdles, many audit departments are not standing still. The survey notes that 57% are actively assessing control weaknesses that could enable fraud, and 51% are advising management on AI governance and policy. However, these efforts are often hampered by a lack of the resources needed to fully implement and monitor effective controls.
The AI Paradox: A Tool for Both Crime and Compliance
Ironically, the same technology fueling this new wave of fraud is also seen as the most potent weapon to combat it. The report highlights a significant push within the profession to adopt AI for defensive purposes. An overwhelming 83% of respondents expect their own internal audit functions to increase their use of AI over the next year.
Currently, AI is most frequently leveraged in audit planning and reporting, but its application in continuous risk assessment and real-time fieldwork is growing. Advanced, AI-powered Governance, Risk, and Compliance (GRC) platforms can analyze massive datasets in real time, detecting subtle anomalies and suspicious patterns that would be invisible to human auditors. These systems can flag unusual transaction patterns, identify potential insider threats, and continuously monitor for compliance breaches, shifting the audit function from a periodic, backward-looking review to a proactive, forward-looking defense.
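To make the idea of automated anomaly flagging concrete, the toy sketch below applies a simple z-score test to a stream of transaction amounts. It is a minimal illustration only, not how any particular GRC product works; the function name, the three-sigma threshold, and the data are all hypothetical, and production systems use far richer statistical and machine-learning models.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Return the transaction amounts lying more than `threshold`
    standard deviations from the mean -- a toy stand-in for the
    continuous anomaly monitoring a GRC platform might perform.

    Hypothetical example only; real platforms model far more signal
    (vendor history, timing, approval chains) than raw amounts.
    """
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# A single $10,000 payment among routine $100 payments stands out:
print(flag_anomalies([100.0] * 20 + [10000.0]))  # → [10000.0]
```

Even this crude rule shows the shift the report describes: instead of sampling transactions after the fact, software screens every record as it arrives and surfaces outliers for a human auditor to investigate.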
This dual nature of AI presents a strategic paradox for businesses. To defend against AI-powered attacks, organizations must themselves become proficient in deploying AI, creating a technological arms race where falling behind means leaving the door wide open to catastrophic financial and reputational damage. Global cybercrime costs are already projected to hit $10.5 trillion annually by 2025, with generative AI expected to contribute tens of billions in fraud-related losses.
A Call for Action: Governance, Collaboration, and Regulation
Closing the readiness gap requires more than just new software. The report emphasizes that the most critical actions involve people and processes. Audit leaders are prioritizing continuous, updated training to build AI-related skills within their teams. They also stress the need for stronger collaboration between the audit function and IT, cybersecurity, and risk management teams to create a unified defense strategy.
This internal push is being reinforced by external pressure from regulators. Government bodies like the U.S. Federal Trade Commission (FTC) are cracking down on AI-driven impersonation scams, while the Securities and Exchange Commission (SEC) is targeting "AI-washing," where companies mislead investors about their AI capabilities. Furthermore, comprehensive regulations like the European Union's AI Act are setting new global standards for AI governance, transparency, and risk management that will impact North American companies.
For organizations, the message is clear: addressing the threat of AI-enabled fraud is not solely the responsibility of the internal audit department. It demands a top-down strategic commitment from the C-suite and the board to foster a culture of risk awareness, invest in both human and technological capabilities, and build a resilient enterprise capable of navigating the complex opportunities and threats of the AI era.
