Cyberattacks Evolve: Hackers Now Target Trust, Not Just Technology

📊 Key Data
  • 61% of Business Email Compromise (BEC) attacks are now Vendor Email Compromise (VEC)
  • 58% of attacks analyzed were phishing, with 20% using redirect chains
  • $2.7 billion in losses reported from BEC scams in the last reporting period
🎯 Expert Consensus

Experts agree that social engineering, particularly through Vendor Email Compromise (VEC), has become the dominant cyber threat, requiring advanced behavioral AI defenses to counter the exploitation of human trust and workflows.

LAS VEGAS, NV – April 22, 2026

A seismic shift is underway in the world of cybercrime. Attackers, increasingly sophisticated in their methods, are moving away from exploiting purely technical vulnerabilities and are instead targeting the most unpredictable and valuable asset in any organization: its people. A new report reveals that cybercriminals are weaponizing trust, human behavior, and routine business workflows to bypass traditional security defenses with alarming success.

The findings, detailed in the 2026 Attack Landscape Report by behavioral AI security firm Abnormal AI, are based on an analysis of nearly 800,000 email attacks across more than 4,600 organizations. The report paints a clear picture of a threat landscape where social engineering has become the dominant attack vector, underscoring a trend that security experts have warned about for years. The human element, once considered a secondary concern to firewalls and software patches, is now the primary battlefield.

The New Face of Cybercrime: Exploiting Trust and Workflows

Modern cyberattacks are less about brute-force code-breaking and more about psychological manipulation. The report highlights that attackers are meticulously studying how organizations operate, then inserting themselves into trusted processes. This evolution is most apparent in the dramatic rise of Vendor Email Compromise (VEC), which now accounts for a staggering 61% of all Business Email Compromise (BEC) attacks. Instead of impersonating a CEO to request a wire transfer, criminals now find it more effective to pose as a legitimate supplier or partner.

This trend is echoed across the industry. The most recent Verizon Data Breach Investigations Report (DBIR) identifies social engineering as a persistent and dominant attack pattern, while the SANS Institute consistently flags it as the top human-related security risk. Attackers are exploiting the inherent trust that facilitates modern business. They understand that an email from a known vendor is far less likely to raise suspicion than one from an unknown source, allowing them to slip past both human and technological guards. The core of the strategy is to appear not just legitimate, but completely mundane and part of the daily operational noise.

Phishing remains the most common entry point, constituting 58% of all attacks analyzed in the Abnormal AI report. However, these are not the poorly worded emails of the past. Modern phishing campaigns are highly targeted and employ advanced evasion techniques. Over one-fifth of these attacks now use redirect chains, routing victims through multiple benign-looking URLs to obscure the final malicious destination from legacy security tools that scan for known bad links.
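The mechanics of that evasion are simple to illustrate. The sketch below is hypothetical, not drawn from the report: it shows why a scanner that judges only the first URL in an email sees a benign domain, while only following the full chain reveals the landing page. The `follow_chain` helper and the hard-coded response table stand in for live HTTP lookups.

```python
# Toy illustration of redirect-chain evasion: each hop returns an
# HTTP redirect status and a Location target. A naive scanner that
# checks only the first URL never sees the final destination.
# The `responses` dict is stand-in data, not a real API.

def follow_chain(start_url, responses, max_hops=10):
    """Walk HTTP-style redirects until a non-redirect response or the hop limit."""
    url, hops = start_url, 0
    while hops < max_hops:
        status, location = responses.get(url, (200, None))
        if status not in (301, 302, 307, 308) or location is None:
            return url, hops  # final landing page reached
        url, hops = location, hops + 1
    raise RuntimeError("redirect loop or chain too long")

# Two benign-looking hops hide the final landing page.
responses = {
    "https://news-letter.example/click":   (302, "https://cdn-redirect.example/r1"),
    "https://cdn-redirect.example/r1":     (302, "https://login-portal.example/verify"),
    "https://login-portal.example/verify": (200, None),
}

final, hops = follow_chain("https://news-letter.example/click", responses)
print(final, hops)
```

A defense that only resolves the first hop would classify this message by `news-letter.example` alone; the suspicious `login-portal.example` page only surfaces after following every redirect.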

The High Stakes of Vendor Fraud

Within the growing threat of VEC, one specific tactic stands out as exceptionally dangerous. According to the report, requests to update a vendor’s billing account information carry a 26.5% compromise rate. This is dramatically higher than the rate for routine invoice inquiries, which is less than 1%. The reason lies in the psychology and process of business finance. A standard invoice can get lost in a high volume of similar payments, but a request to change fundamental payment details often triggers a more deliberate, manual process.

Attackers have recognized this and are willing to invest significantly more time and effort into these high-stakes scenarios. They may spend weeks or months conducting reconnaissance, compromising a real vendor account through a separate attack, or creating a highly convincing impersonation complete with fabricated email histories. The potential payoff—rerouting all future legitimate payments to a fraudulent account—justifies the added complexity. This represents a calculated business decision by criminal enterprises.

The financial fallout is immense. The FBI’s Internet Crime Complaint Center (IC3) has consistently reported that BEC scams, of which VEC is now the dominant form, result in billions of dollars in losses annually. Recent industry data shows these losses topped $2.7 billion in the last reporting period. Furthermore, security analysts note that attackers are increasingly leveraging generative AI to create flawless email text, synthetic voices for follow-up calls, and even deepfake video messages to enhance the credibility of their fraudulent requests, making detection even more challenging for unsuspecting employees.

A Tailored Threat: How Attacks Adapt to Their Targets

The most sophisticated cybercriminals do not use a one-size-fits-all approach. Instead, they tailor their attacks to the specific structure and culture of their target organization. Abnormal AI's report reveals a stark difference in tactics between small and large companies. In small organizations, where executives are often more visible and directly involved in finances, VIP impersonation accounts for 43% of internal impersonation attacks. A fraudulent request seemingly from the CEO is both plausible and effective.

In large enterprises, however, this approach is less successful. Layered approval processes, greater employee separation from the C-suite, and increased security awareness training mean that a sudden wire request from the CEO is more likely to be flagged. Consequently, attackers shift their strategy toward impersonating lower-level employees or colleagues, crafting contextually relevant requests that align with established departmental workflows.

This adaptability is also seen in sector-specific targeting. The report highlights higher education as a uniquely vulnerable environment. Open, collaborative networks and high user turnover create ideal conditions for attackers. Nearly one in eight phishing attacks reaching student inboxes originates from a compromised internal account, and a full third of all BEC attacks in the sector are lateral—meaning an attacker uses one compromised account to move sideways and attack others within the same institution. This creates a cascading effect that is difficult to contain.

The Rise of Behavioral AI as a Defense

As attackers focus on exploiting human behavior, traditional security tools that rely on known signatures and blocklists are proving insufficient. These legacy systems were not designed to assess intent, context, or trust—the very elements that define modern social engineering attacks. This gap has led to the emergence of a new defensive paradigm: behavioral AI.

"Modern email attacks are shaped by the institutions they target," said Piotr Wojtyla, Head of Threat Intel and Platform at Abnormal AI. "Attackers are no longer just trying to circumvent security; they are exploiting the very mechanics of how we work... When that happens, detection becomes a behavioral challenge, requiring AI that continuously learns how people and organizations actually operate."

This approach represents the new frontier in cybersecurity. Instead of looking for known threats, behavioral AI platforms integrate directly with communication systems like email and collaboration tools to build a baseline of normal behavior. They learn the unique communication patterns, relationships, and workflows of every individual and the organization as a whole. When a deviation occurs—such as a vendor suddenly requesting a billing account change from a new email address, using slightly different language, or at an unusual time—the AI can flag it as a high-risk anomaly, even if the email contains no malicious links or attachments.
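A heavily simplified sketch of that baseline-then-flag idea follows. This is an assumption-laden toy, not Abnormal AI's method: the "baseline" here is just the set of sender addresses previously seen per vendor, and a billing-change request from an unseen address scores highest. Real behavioral AI models language, timing, and relationship graphs as well.

```python
# Minimal sketch of behavioral-baseline anomaly scoring for vendor email.
# Baseline = sender addresses observed per vendor; deviation + a
# high-stakes request type (billing change) drives the risk score.
# All names and data here are illustrative.

from collections import defaultdict

class VendorBaseline:
    def __init__(self):
        self.known_senders = defaultdict(set)  # vendor -> addresses seen before

    def observe(self, vendor, sender):
        """Record a legitimate message to build the behavioral baseline."""
        self.known_senders[vendor].add(sender)

    def risk(self, vendor, sender, request_type):
        """Score a new message; unseen sender plus billing change = high risk."""
        unseen = sender not in self.known_senders[vendor]
        if request_type == "billing_change":
            return "high" if unseen else "medium"  # payment edits warrant scrutiny
        return "medium" if unseen else "low"

baseline = VendorBaseline()
baseline.observe("acme-supplies", "invoices@acme-supplies.example")

# Routine invoice from the known address: low risk.
print(baseline.risk("acme-supplies", "invoices@acme-supplies.example", "invoice"))
# Billing-change request from a never-seen lookalike address: flagged high.
print(baseline.risk("acme-supplies", "billing@acme-supp1ies.example", "billing_change"))
```

Note that the second message contains no malicious link or attachment; it is flagged purely because it deviates from learned behavior, which is the core argument for this class of defense.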

The industry is rapidly moving in this direction. Market analysis from firms like Gartner shows significant investment and planning for "Cybersecurity AI assistants," with a majority of organizations either piloting or planning to adopt such technologies. In a world where the biggest threat is an email that looks legitimate, the best defense is a system that knows what "legitimate" truly means. In this new landscape, understanding the rhythm of normal business has become the most critical defense against those who would exploit it.
