Cloud Security Alliance

The Cloud Security Alliance (CSA) is a global not-for-profit organization dedicated to promoting best practices for security assurance within cloud computing and artificial intelligence, while also providing education on securing all forms of computing. Founded in 2008, its mission is to foster a trusted cloud ecosystem by offering essential resources and guidance to both cloud service providers and consumers. The organization's global headquarters are located in Seattle, Washington, USA.

CSA delivers its mission through a range of services including research, education, certification programs, and industry events. Key offerings include frameworks such as the Cloud Controls Matrix (CCM), the Security, Trust, Assurance and Risk (STAR) Registry, and the AI Controls Matrix (AICM), which help organizations assess and improve their security posture. It also provides certifications such as the Certificate of Cloud Security Knowledge (CCSK) and the Trusted AI Safety Expert (TAISE) Certificate, which validate expertise in cloud and AI security.

In recent developments, CSA has significantly expanded its focus on artificial intelligence security, launching the CSAI Foundation with a mission to secure the "agentic control plane." This work includes a new catastrophic risk initiative and authorization as a CVE Numbering Authority. Recent CSA surveys highlight prevalent challenges: 82% of enterprises report unknown AI agents in their environments, and related security incidents are common. The organization is led by Co-Founder and CEO Jim Reavis and President Illena Armstrong, and maintains its position as a leading authority in AI, cloud, and Zero Trust cybersecurity education.

Latest updates

CSAI Foundation Bolsters Agentic AI Security with New Standards, Acquisitions

  • The CSAI Foundation, a subsidiary of the Cloud Security Alliance, launched a 'STAR for AI Catastrophic Risk Annex' to address AI safety concerns, with a four-phase rollout beginning June 2026.
  • The CSAI Foundation was authorized as a CVE Numbering Authority (CNA) by MITRE, initially focusing on its own software tools.
  • The CSAI Foundation acquired the 'Autonomous Action Runtime Management' (AARM) specification from Vanta and the 'Agentic Trust Framework' (ATF) from MassiveScale.AI.
  • Herman Errico will lead the AARM specification development, while Josh Woodruff will continue to lead ATF development.

The CSAI Foundation's moves reflect the growing urgency around securing agentic AI, a space experiencing rapid innovation and adoption. The acquisition of AARM and ATF, coupled with CNA authorization, signals a shift towards proactive vulnerability management and standardized governance in a sector increasingly concerned with catastrophic AI risk. This initiative aims to provide a framework for enterprises to confidently deploy agentic AI while mitigating potentially severe societal consequences.

Governance Dynamics
Whether CSAI's initiatives align with NIST, EU AI Act, and ISO standards will determine its influence on emerging AI governance frameworks, and whether it can become a de facto standard.
Regulatory Headwinds
The effectiveness of the Catastrophic Risk Annex will be judged by regulators, and its adoption will be influenced by the broader regulatory landscape surrounding AI safety and control.
Execution Risk
The four-phase rollout of the Catastrophic Risk Annex presents execution risk; delays or shortcomings could undermine CSAI's credibility and slow adoption of its standards.

AI Agent 'Retirement Debt' Threatens Enterprise Security, CSA Survey Finds

  • A Cloud Security Alliance (CSA) survey found 82% of enterprises have unknown AI agents running in their IT infrastructure.
  • 65% of organizations experienced AI agent-related incidents in the past 12 months, resulting in data exposure, operational disruption, and financial losses.
  • Only 21% of respondents have formal AI agent decommissioning processes in place, leading to 'retirement debt' and long-term risk.
  • Despite 68% reporting high visibility, 82% have discovered previously unknown AI agents in the past year, primarily in automation and LLM environments.

The CSA survey highlights a critical blind spot in enterprise security: the proliferation of uncontrolled AI agents. This 'retirement debt' represents a growing structural risk, as organizations increasingly rely on autonomous systems without adequate lifecycle management. The findings underscore a broader trend of AI outpacing existing security controls and necessitate a fundamental shift towards intent-based security models.
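The lifecycle gap the survey describes can be illustrated with a minimal sketch. Everything here is hypothetical (agent names, fields, and the 30-day idle window are illustrative, not from the survey): it flags agents that have gone quiet without a formal decommissioning record, the pattern the survey labels 'retirement debt'.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical agent record; a real inventory would track far more metadata.
@dataclass
class AgentRecord:
    name: str
    owner: str
    last_seen: datetime
    decommissioned: bool = False

def find_retirement_debt(agents, now, max_idle=timedelta(days=30)):
    """Return agents that look abandoned: idle past the window but never
    formally decommissioned, so their credentials and access remain live."""
    return [a for a in agents
            if not a.decommissioned and now - a.last_seen > max_idle]

now = datetime(2026, 1, 1)
inventory = [
    AgentRecord("billing-bot", "finance", now - timedelta(days=2)),
    AgentRecord("legacy-etl-agent", "data-eng", now - timedelta(days=90)),
    AgentRecord("old-chat-agent", "support", now - timedelta(days=200),
                decommissioned=True),
]
debt = find_retirement_debt(inventory, now)  # flags only legacy-etl-agent
```

The point of the sketch is the distinction it encodes: an agent that is merely old but formally decommissioned carries no debt, while a silent agent with live credentials does.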

Governance Dynamics
The shift from discovery to managing AI agent behavior at scale will require significant investment in automated policy enforcement and continuous monitoring, potentially straining existing security budgets.
Regulatory Headwinds
Increased awareness of AI agent risk will likely accelerate regulatory scrutiny and mandate stricter governance frameworks, impacting deployment flexibility and increasing compliance costs.
Execution Risk
The disconnect between perceived visibility (68%) and actual agent discovery (82%) suggests a systemic failure in current security practices, and remediation efforts may prove more complex and costly than initially anticipated.

AI Agent Security Lags, 53% of Firms Report Scope Violations

  • A Cloud Security Alliance (CSA) study, commissioned by Zenity, found 53% of organizations have experienced AI agent scope violations.
  • Nearly half (47%) of surveyed organizations reported a security incident involving an AI agent in the past year.
  • Detection and response times for AI agent incidents average hours to days.
  • 43% of organizations report that more than half of employees regularly use AI agents, spanning IT, security, customer service, and engineering.
  • Only 13% of respondents feel highly prepared for upcoming AI-related regulations.

The CSA study highlights a critical misalignment between the rapid adoption of AI agents within enterprises and the maturity of security and governance frameworks. This gap exposes organizations to escalating risks, including data breaches, compliance violations, and operational disruptions. The findings underscore a broader trend of AI deployment outpacing the development of necessary controls, potentially creating a significant drag on enterprise digital transformation initiatives.

Governance Dynamics
The discrepancy between documented governance policies (50%) and formal adoption (31%) suggests a significant gap between intent and action, which will likely increase operational risk.
Regulatory Headwinds
The low preparedness (13%) for AI-related regulations indicates potential for significant compliance costs and legal challenges as frameworks solidify.
Execution Risk
The reliance on legacy security models, unable to monitor autonomous agent actions, will hinder effective incident response and necessitate a fundamental shift in security architecture.

Unstructured Data Security Gap Widens as AI Adoption Accelerates

  • A Cloud Security Alliance (CSA) survey, commissioned by Thales, found that 68% of organizations have less than 80% of their unstructured data protected.
  • Organizations report unstructured data accounts for approximately 33% of enterprise data, and nearly a third state it accounts for over half of annual data growth.
  • 75% of organizations express confidence in their unstructured data security, despite the significant protection gap.
  • Nearly one-third of organizations use 11 or more tools to manage unstructured data, indicating tool sprawl and complexity.

The CSA report underscores a growing chasm between the rapid expansion of unstructured data and the ability of organizations to secure it. This trend is being exacerbated by the increasing adoption of AI, which both creates new security risks and is expected to be a key tool for mitigation. The widespread use of multiple, fragmented security tools highlights the complexity and lack of standardization in managing unstructured data, potentially leading to increased operational costs and compliance challenges.

Governance Dynamics
The disconnect between perceived security confidence and actual protection levels suggests a systemic issue with risk assessment and governance frameworks that will require remediation.
AI Integration
The reliance on AI for both threat detection and data classification creates a potential feedback loop where existing security gaps are amplified, necessitating a focus on foundational security controls.
Scalability Challenges
The proliferation of tools and manual processes will increasingly hinder the ability of organizations to scale their unstructured data security programs, requiring consolidation and automation.

AI Agent Access Lapses Threaten Enterprise Security

  • A Cloud Security Alliance (CSA) study found that 68% of organizations cannot distinguish between human and AI agent activity.
  • 73% of organizations expect AI agents to be vital within the next year.
  • 85% of organizations are using AI agents in production environments.
  • 74% of organizations report AI agents often receive more access than necessary.
  • Responsibility for AI agent identity and access is fragmented across departments, with only 9% identifying IAM teams as the primary owner.

The rapid adoption of AI agents is outpacing the ability of organizations to manage their access and identity, creating a significant and growing security risk. This disconnect highlights a fundamental flaw in existing IAM models, which were not designed to handle the complexities of autonomous AI systems. The findings suggest a need for a paradigm shift in how organizations approach identity and access management, moving beyond reactive containment to proactive, identity-centric controls.

Governance Dynamics
The lack of centralized ownership for AI agent access will likely exacerbate security risks and slow down remediation efforts as AI deployments scale.
Regulatory Headwinds
Increased scrutiny from regulators regarding AI governance and data security will force organizations to prioritize identity and access controls for AI agents.
Execution Risk
The reliance on governance mechanisms like token revocation as a primary containment strategy highlights a lack of robust, real-time access enforcement, increasing the potential for significant data breaches.

Cloud Security Alliance Launches Foundation to Secure Agentic AI Ecosystems

  • The Cloud Security Alliance (CSA) launched CSAI, a new 501(c)(3) non-profit foundation, at RSAC 2026.
  • CSAI's mission is to “Secure the Agentic Control Plane” – governing identity, authorization, and trust for autonomous AI agent ecosystems.
  • CSAI’s programs include an AI Risk Observatory, Agentic Best Practices guidance, and a TAISE certification expansion.
  • CSAI has partnered with the Coalition for Secure AI (CoSAI) and will contribute to its Technical Steering Committee.

The emergence of CSAI signals a growing recognition of the unique security challenges posed by the shift towards autonomous AI agents. As enterprises increasingly deploy agentic AI, the focus is shifting from securing AI models to securing the entire ecosystem, requiring a new layer of trust infrastructure. This move by CSA reflects a broader trend of specialized organizations emerging to address the complex governance and security needs of the AI era.

Governance Dynamics
The success of CSAI hinges on its ability to establish and enforce industry-wide standards for agentic AI security, potentially facing resistance from organizations prioritizing speed over security.
Regulatory Headwinds
Increased regulatory scrutiny of AI, particularly concerning autonomous agents, could force CSAI to adapt its programs and certifications to align with evolving legal frameworks.
Execution Risk
CSAI’s ambitious six-program strategy, including initiatives like TAISE-Agent Certification, presents significant operational and logistical challenges that could impact its overall effectiveness.

CSA's AI Controls Matrix Wins Recognition Amidst Regulatory Scrutiny

  • The Cloud Security Alliance's (CSA) AI Controls Matrix (AICM) received a 2026 CSO Award.
  • AICM is a vendor-agnostic framework for secure AI development and implementation, building on CSA’s Cloud Controls Matrix (CCM).
  • The AICM serves as the foundation for STAR for AI, a blueprint for securing generative AI systems.
  • The award recognizes CSA's efforts to bridge the gap between ethical AI guidelines and practical implementation.
  • The award comes as numerous international AI regulations and standards are set to take effect.

The CSO Awards recognition underscores the growing importance of AI security governance as generative AI becomes more pervasive. The AICM's development reflects a broader shift towards proactive risk management and compliance within the AI ecosystem, driven by increasing regulatory scrutiny and stakeholder concerns. This framework's success will be a bellwether for how the industry navigates the complex intersection of innovation and responsible AI deployment.

Regulatory Headwinds
The AICM's value will be increasingly tied to its ability to facilitate compliance with the growing number of international AI regulations, creating a potential bottleneck for AI adoption if not consistently updated.
Implementation Risk
While the AICM provides a framework, its practical adoption across diverse AI value chains—from model providers to application developers—will be critical to its overall impact and may reveal unforeseen challenges.
Competitive Landscape
The emergence of other AI security frameworks and standards could dilute the AICM’s influence and necessitate CSA to continually demonstrate its unique value proposition and maintain industry leadership.

AI Agent Security Gap Threatens Compliance, Fuels Identity Overhaul

  • A Cloud Security Alliance (CSA) survey, commissioned by Strata Identity, found 84% of organizations doubt they can pass a compliance audit focused on AI agent behavior.
  • 70% of organizations expect to manage dozens to hundreds of AI agents within the next 12 months, indicating rapid adoption.
  • Only 18% of respondents are highly confident their current Identity and Access Management (IAM) systems can handle agent identities.
  • 44% of organizations are using or plan to use static API keys for agent access, a significant security risk.
  • 40% of organizations are increasing identity and security budgets to accommodate AI agents, with 34% allocating a dedicated budget line.

The survey highlights a critical disconnect between the rapid adoption of AI agents and the ability of existing identity and access management frameworks to secure them. This 'time-to-trust' phase represents a significant challenge for enterprises, potentially delaying broader AI adoption and creating new compliance risks. The increased investment in identity and security signals a recognition of this gap, but the reliance on static credentials and fragmented controls suggests a reactive rather than proactive approach.

Governance Dynamics
The lack of real-time agent inventory and traceability is likely to draw regulatory scrutiny and complicate compliance efforts as AI agent deployments scale.
Architectural Shifts
Organizations face a choice between extending existing IAM models and embracing identity architectures designed for agentic systems; continued reliance on legacy approaches will keep creating vulnerabilities.
Investment Trajectory
The pace at which dedicated AI agent security budgets grow relative to overall security spending will shape whether the identity management vendor landscape consolidates or innovates.

AI Identity Security Lags, Exposing Enterprises to Growing Risk

  • A Cloud Security Alliance (CSA) survey, commissioned by Oasis Security, found that 79% of IT professionals feel ill-equipped to prevent attacks via non-human identities (NHIs).
  • The survey revealed that 78% of organizations lack documented policies for AI identity creation and removal, and 92% lack confidence in their legacy IAM solutions to manage AI-related risks.
  • Manual processes dominate identity lifecycle management, with only 14% fully automating creation and removal, leading to slow remediation times (24% taking over 24 hours to rotate credentials).
  • The survey, conducted in August/September 2025, included responses from 383 IT and security professionals across various organizations.

The rapid proliferation of AI agents and automated workflows is creating a massive explosion in identity creation and access, far outpacing the ability of traditional identity management systems to keep pace. This gap represents a significant and growing attack surface for enterprises, and the lack of governance and automation is creating a bottleneck for AI adoption. The findings underscore a systemic vulnerability as organizations increasingly rely on AI for core business functions.
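The slow rotation times the survey reports (24% taking over 24 hours) stem from manual lifecycle steps; an automated staleness check like the sketch below is the kind of control the findings imply is missing. All credential names, issue dates, and the 90-day rotation window are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical non-human-identity credential inventory: name -> issue date.
credentials = {
    "ci-deploy-key":   datetime(2025, 9, 1),
    "ml-agent-token":  datetime(2025, 12, 30),
    "etl-service-key": datetime(2025, 6, 15),
}

def stale_credentials(creds, now, max_age=timedelta(days=90)):
    """Flag NHI credentials overdue for rotation, the step the survey
    found most organizations still perform manually."""
    return sorted(name for name, issued in creds.items()
                  if now - issued > max_age)

overdue = stale_credentials(credentials, now=datetime(2026, 1, 1))
```

Run continuously, a check like this turns credential rotation from a multi-day manual remediation into a routine, automatically triggered event.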

Governance Dynamics
The lack of formalized AI identity governance will likely force a rapid shift towards policy-as-code and automated enforcement, potentially creating a market for specialized governance tooling.
Legacy Systems
The widespread inadequacy of legacy IAM systems to handle AI identities will accelerate the migration to cloud-native identity platforms, putting pressure on vendors to offer AI-specific capabilities.
Operational Strain
The slow remediation times highlighted in the report suggest a significant operational burden, which will likely drive demand for automated credential lifecycle management solutions and increased investment in security operations.