AI Security Lags: Most Firms Lack Shutdown Response Plans
Event summary
- ISACA’s 2026 AI Pulse Poll surveyed over 3,400 digital trust professionals globally.
- 56% of respondents stated they do not know how quickly they could halt an AI system in a security incident.
- Only 36% of organizations require human approval for most AI-generated actions.
- 32% of respondents believe their board or executives would be held responsible if an AI system caused harm.
The big picture
The findings highlight a critical disconnect between the pace of AI adoption and the maturity of the governance and security practices meant to contain it. This gap creates substantial operational and legal risk, particularly as AI becomes more deeply embedded in core business processes and decision-making. The lack of clarity around accountability and incident response points to a systemic challenge in managing the risks of AI deployment.
What we're watching
- Governance Dynamics: With boards and executives increasingly seen as accountable for AI-related harm, many organizations will likely need a broader governance overhaul.
- Regulatory Headwinds: Unclear disclosure requirements around AI usage and uncertainty over incident response will likely draw increased regulatory scrutiny, potentially leading to stricter compliance mandates.
- Execution Risk: The inability to rapidly shut down an AI system during a security incident exposes organizations to significant operational and reputational risk, which will require investment in automated response capabilities and robust incident management; a minimal kill-switch pattern is sketched below.
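As an illustration only, the sketch below shows one way to combine the two controls the poll asks about: a kill switch that can halt AI-generated actions immediately, and a human-approval gate in front of them. All names here (AIActionGate, halt, dispatch) are hypothetical and not drawn from any specific product, framework, or the ISACA poll itself.

```python
import threading

# Hypothetical sketch of an AI action gate: a kill switch plus an optional
# human-approval step. Names and structure are illustrative assumptions.
class AIActionGate:
    """Gate AI-generated actions behind a kill switch and human approval."""

    def __init__(self, require_human_approval: bool = True):
        self._halted = threading.Event()  # when set, refuse all actions
        self.require_human_approval = require_human_approval

    def halt(self, reason: str) -> None:
        """Flip the kill switch; the reason should be logged for audit."""
        print(f"[KILL SWITCH] halting AI actions: {reason}")
        self._halted.set()

    def resume(self) -> None:
        """Clear the kill switch after the incident is resolved."""
        self._halted.clear()

    def dispatch(self, action: str, approver=None) -> bool:
        """Execute an AI-proposed action only if the gate allows it.

        `approver` is a callable taking the action description and
        returning True/False; it stands in for a human reviewer.
        """
        if self._halted.is_set():
            print(f"blocked (system halted): {action}")
            return False
        if self.require_human_approval:
            approved = approver(action) if approver else False
            if not approved:
                print(f"blocked (no human approval): {action}")
                return False
        print(f"executing: {action}")
        return True


if __name__ == "__main__":
    gate = AIActionGate(require_human_approval=True)
    gate.dispatch("send refund to customer 42", approver=lambda a: True)
    gate.halt("anomalous output rate detected")  # simulated incident
    gate.dispatch("send refund to customer 43", approver=lambda a: True)
```

The design choice the sketch highlights is that the halt check runs before the approval check, so a triggered kill switch blocks everything regardless of approvals; in practice the flag would live in shared infrastructure (a feature-flag service or circuit breaker) rather than in-process state, so every consumer of the AI system honors it at once.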