Enterprises Deploy GenAI at Pace, but Security and Governance Lag
Event summary
- 52% of enterprises have fully or partially deployed GenAI, but only 20% have reached AI maturity in cybersecurity.
- Only 41% of organizations have AI-specific data privacy policies in place.
- 62% of respondents find it difficult to minimize bias and other model risks when developing language models.
- 59% say AI makes it more difficult to comply with privacy and security regulations.
- 47% of organizations believe their AI models can learn robust norms and make safe decisions autonomously.
The big picture
The study highlights a critical disconnect between the rapid adoption of GenAI and the foundational security and governance measures needed to manage its risks. As AI systems become more autonomous and embedded in critical operations, organizations face growing challenges in maintaining trust, compliance, and long-term business value. The findings underscore the need for secure information management, clear governance frameworks, and continuous monitoring to keep AI systems trustworthy and compliant.
What we're watching
- Governance Dynamics: How enterprises will bridge the gap between rapid GenAI adoption and robust governance frameworks.
- Regulatory Headwinds: Whether the lack of AI-specific data privacy policies will trigger regulatory scrutiny.
- Operational Reliability: The pace at which organizations can address errors in AI decision rules and data inputs.