Banks Face AI Risks With Critical Knowledge Gaps, Survey Finds

📊 Key Data
  • 79% of bank leaders identify fraud as a top risk in 2026, with AI as the primary vector
  • 84% fear AI-powered scams targeting customers, 77% worry about attacks on employees
  • 33% of executives admit they do not understand agentic AI at all
🎯 Expert Consensus

Experts warn that banks must urgently bridge critical knowledge gaps in AI, particularly around agentic systems, to mitigate fraud, governance, and systemic risks as adoption accelerates.

NASHVILLE, Tenn. – March 31, 2026 – As financial institutions race to adopt artificial intelligence, a new report reveals a troubling paradox: bank leaders are grappling with significant knowledge gaps concerning the very technology they are implementing, even as concerns over AI-driven fraud, strategic competition, and credit risk reach new heights.

Bank Director's 2026 Risk Survey, sponsored by Baker Tilly, paints a picture of an industry at a crossroads. While embracing innovation, bank executives and board members are acutely aware of the emerging threats. A striking 79% of respondents identified fraud as a top risk for 2026, with the vast majority pointing directly to AI as the vector. Eighty-four percent are most concerned about AI-powered scams targeting their customers, while 77% fear similar attacks on their own employees and organization.

The Agentic AI Blind Spot

The survey's most alarming finding may be the disconnect between AI adoption and executive understanding. While most leaders reported a baseline familiarity with concepts like machine learning, a full third admitted to not understanding "agentic AI" at all. This advanced form of AI, which involves autonomous agents that can make decisions and execute multi-step tasks with minimal human oversight, is poised to revolutionize finance but also introduces profound governance challenges.

Unlike generative AI, which creates content, agentic systems can act on that content to achieve goals—from executing trades to processing loan applications. This autonomy, however, creates risks that many banks are unprepared for. A "rogue" or compromised AI agent could trigger financial actions at machine speed, magnifying errors and potentially creating systemic disruptions. This highlights a critical need for robust oversight, which is impossible without a foundational understanding of the technology.
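The oversight pattern the article describes, capping what an autonomous agent can do before a human must sign off, can be sketched in a few lines. This is a minimal, purely illustrative example; the class and threshold are hypothetical and not drawn from the survey or any real banking system.

```python
# Illustrative sketch of a guardrail around an autonomous agent's actions.
# All names and limits here are hypothetical, not from any real system.

from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str       # e.g. "trade" or "loan_approval"
    amount: float   # dollar value the agent wants to move

@dataclass
class GuardedExecutor:
    """Executes small agent actions automatically; escalates large ones."""
    auto_limit: float                      # max value run without review
    executed: list = field(default_factory=list)
    pending_review: list = field(default_factory=list)

    def submit(self, action: Action) -> str:
        if action.amount <= self.auto_limit:
            self.executed.append(action)
            return "executed"
        # Anything above the limit is held for human sign-off rather
        # than carried out at machine speed.
        self.pending_review.append(action)
        return "escalated"

executor = GuardedExecutor(auto_limit=10_000)
print(executor.submit(Action("trade", 2_500)))    # small: runs automatically
print(executor.submit(Action("trade", 250_000)))  # large: held for review
```

The point of the sketch is the one the survey's respondents may be missing: without such a boundary, a compromised agent's errors compound at machine speed before anyone can intervene.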

"Banks need a baseline level of understanding so everyone knows what's in play," said Mark Wuchte, Baker Tilly's financial services risk advisory leader, in the press release. "Without that foundation, you risk people inadvertently using tools outside of the bank's oversight. Governance needs to be part of the conversation from the very beginning."

The risks of this knowledge gap are multifaceted. Inadequately governed agentic systems could lead to regulatory breaches in areas like credit decisions, introduce subtle but significant data biases, and expand the institution's cybersecurity attack surface. Experts warn that traditional "human-in-the-loop" controls may be insufficient to manage systems designed to operate autonomously at high speeds.

A Tangled Web of Threats: From Cyber to Credit

The anxiety surrounding AI is intertwined with a broader, more complex risk landscape. The survey shows that while banks are taking cybersecurity seriously—with 89% of CEOs and tech executives conducting tabletop exercises of their incident response plans—these drills are uncovering significant vulnerabilities. The most common gaps identified were an overreliance on a single individual or function (36%) and failures in internal communication (35%), weaknesses that could be catastrophic during a sophisticated, AI-driven cyberattack.

This is compounded by a lack of external perspective at the board level. While 79% of boards review and approve their bank's cybersecurity strategy, less than half (47%) invited outside experts to discuss emerging trends over the past year. This insularity could leave institutions vulnerable to novel threats that internal teams have not yet encountered.

Simultaneously, traditional financial risks are intensifying. Concern over credit risk has climbed, with 60% of respondents now naming it a top risk, up from 51% a year ago. The commercial real estate (CRE) sector is a primary source of this anxiety. Lingering post-pandemic shifts to remote work have depressed demand for office space, while a high-interest-rate environment has squeezed property valuations and made refinancing a significant challenge. With a wave of CRE loans originated in a low-rate era now maturing, banks are bracing for potential defaults, particularly those with high portfolio concentrations in the sector, a concern for 38% of respondents.

Shifting Strategies and Regulatory Questions

The competitive pressure from AI is also reshaping strategic priorities. The survey reveals a sharp increase in concern over strategic risk, which jumped to 42% from 30% in the prior year. This is largely fueled by the rise of fintech firms and non-bank competitors that are leveraging AI to offer hyper-personalized customer experiences, achieve greater operational efficiency, and deploy more sophisticated credit assessment models. These nimble challengers, unburdened by legacy systems, are setting new customer expectations and threatening to erode the market share of traditional institutions.

"As the financial services landscape continues to broaden, it's no surprise to see that bank leaders have grown more concerned about how competitors could threaten market share," noted Emily McCormick, vice president of editorial & research at Bank Director. "Banks will need to pursue their best path for success as use cases for AI grow clearer."

Paradoxically, as these complex technological and strategic risks mount, concern over regulatory risk has plummeted, falling from 55% last year to just 28%. This decline comes despite findings that 35% of bankers felt their most recent examiner was inexperienced, and 38% believed their primary regulator was understaffed. This raises a critical question: Are banks growing complacent about regulation at the precise moment that oversight is becoming more complex?

Regulators like the Federal Reserve, OCC, and CFPB are not ignoring AI. They are actively working to apply existing rules for risk management, model governance, and consumer protection to AI systems. The CFPB has been particularly clear that there is no "fancy new technology" exemption from laws preventing discrimination or unfair practices. The evolving landscape suggests that regulatory scrutiny is transforming, not disappearing, and banks that fall behind on AI governance may find themselves facing significant compliance challenges.

Bridging the Governance Gap

Addressing these interconnected challenges requires a fundamental shift in governance and oversight. The survey's findings underscore an urgent need for banks to move beyond passive approval of technology strategies to active, informed engagement. For AI, this means establishing robust governance frameworks that are integrated into enterprise-wide risk management.

Best practices, aligned with frameworks like the NIST AI Risk Management Framework, call for cross-functional governance committees that include legal, compliance, risk, and technology leaders. These bodies are responsible for setting clear policies, vetting AI projects for ethical and bias risks, and ensuring that systems are transparent and explainable.

On the cybersecurity front, the data suggests a clear need for boards to seek more external expertise. Bringing in outside specialists provides an independent perspective on emerging threats and can help bridge the internal knowledge gaps that the survey highlights, particularly around advanced topics like agentic AI. This external validation is a critical component of a mature cybersecurity posture. Ultimately, navigating the risks of 2026 and beyond will require a commitment to continuous education, from the server room to the boardroom, ensuring that strategy and oversight keep pace with the rapid evolution of technology.

