AI Security Lags, Eroding Confidence as Adoption Accelerates
Event summary
- 32% of AI/LLM vulnerabilities are rated 'high-risk,' 2.7x the rate of other applications.
- Only 38% of high-risk LLM vulnerabilities are resolved, the lowest resolution rate across all application types.
- 20% of organizations report experiencing an LLM security incident in the past year, with 18% unsure and 19% declining to answer.
- Security professional confidence in their ability to manage AI security risks has dropped from 64% to 51% year-over-year.
- Remediation times for high-risk vulnerabilities differ by roughly eight months (249 days) between the fastest and slowest organizations.
The big picture
The Cobalt report underscores a growing disconnect between the rapid adoption of AI and the ability of security teams to effectively manage the associated risks. The low resolution rates for LLM vulnerabilities, coupled with declining security confidence, suggest a systemic challenge that extends beyond technical fixes and requires a fundamental shift in security strategy. This trend is likely to drive increased investment in offensive security services and potentially influence regulatory scrutiny of AI deployments.
What we're watching
- Governance Dynamics
- The widening gap between executive perception and practitioner reality on SLA adherence will likely intensify pressure to improve security processes and may trigger internal restructuring.
- Regulatory Headwinds
- The lack of vendor-led fixes for LLM vulnerabilities will accelerate calls for regulatory oversight and could lead to mandated security controls for AI deployments.
- Execution Risk
- The observed eight-month remediation gap highlights a critical execution risk; organizations with slower remediation cycles will face disproportionately higher exposure to security incidents.
