Securing Telecom's AI Future: SAS and TM Forum Forge Governance Blueprint
As telecoms race to adopt agentic AI, SAS and TM Forum are building critical security guardrails to prevent systemic risks in our connected world.
CARY, NC – November 25, 2025 – As the telecommunications sector races to integrate artificial intelligence into its core operations, a critical question looms over the industry: Can this powerful technology be trusted? In a move aimed at answering that question, data and AI leader SAS has announced it will co-chair a key workstream within TM Forum's AI-Native Blueprint Project. The collaboration targets the growing gap between rapid AI adoption and the lagging development of security and governance frameworks, a chasm that poses a significant threat to the stability of global communications infrastructure.
This initiative comes as communications service providers (CSPs) increasingly look to AI to manage network complexity and boost operational efficiency. However, with research indicating that a staggering 84% of AI tool providers have experienced security breaches, the rush to deploy AI without robust guardrails is a high-stakes gamble. SAS's commitment to co-chairing the AI Security and Governance Workstream signals a concerted effort to move the industry from ad-hoc experimentation to a standardized, secure, and responsible AI future.
The Agentic AI Double-Edged Sword
At the heart of the telecom industry's transformation is the rise of 'agentic AI'—autonomous systems that can perceive, reason, and act with minimal human oversight. These are not just tools that recommend actions; they are designed to execute them, from self-healing network faults to dynamically optimizing data traffic. As Guy Lupo, TM Forum's EVP of AI and Data Innovation, notes, "Agentic AI is rapidly eclipsing every other technology priority for CSPs as they work to modernize, scale, and ultimately trust AI-driven operations." The promise is immense: radically streamlined operations, predictive maintenance that prevents outages, and the ability to manage the overwhelming complexity of 5G, IoT, and edge computing.
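The perceive-reason-act loop Lupo describes can be pictured as a control loop over network telemetry. The sketch below is a hypothetical, simplified illustration (the class names, actions, and threshold are invented for this example, not drawn from any SAS or TM Forum specification) of an agent that proposes a remediation for a detected fault and executes it autonomously only when its confidence clears a configurable bar, otherwise escalating to a human operator.

```python
# Hypothetical sketch of an agentic "perceive -> reason -> act" loop for
# self-healing network operations. Names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Fault:
    cell_id: str
    symptom: str          # e.g. "packet_loss_spike"
    severity: float       # 0.0 (minor) .. 1.0 (critical)

@dataclass
class Remediation:
    action: str           # e.g. "restart_baseband", "reroute_traffic"
    confidence: float     # agent's confidence that the action resolves the fault

AUTONOMY_THRESHOLD = 0.9  # below this, a human must approve the action

def perceive(telemetry: dict) -> Fault | None:
    """Turn raw telemetry into a structured fault observation."""
    if telemetry.get("packet_loss", 0.0) > 0.05:
        return Fault(cell_id=telemetry["cell_id"], symptom="packet_loss_spike",
                     severity=min(telemetry["packet_loss"] * 10, 1.0))
    return None

def reason(fault: Fault) -> Remediation:
    """Toy policy: reroute severe faults, otherwise restart the affected unit."""
    if fault.severity > 0.7:
        return Remediation(action="reroute_traffic", confidence=0.95)
    return Remediation(action="restart_baseband", confidence=0.8)

def act(fault: Fault, plan: Remediation) -> str:
    """Execute autonomously only above the threshold; otherwise escalate."""
    if plan.confidence >= AUTONOMY_THRESHOLD:
        return f"EXECUTED {plan.action} on {fault.cell_id}"
    return f"ESCALATED {plan.action} on {fault.cell_id} for human approval"

if __name__ == "__main__":
    sample = {"cell_id": "cell-0421", "packet_loss": 0.12}
    fault = perceive(sample)
    if fault:
        print(act(fault, reason(fault)))
```

The point of the confidence gate is the one Lupo makes: decision-making shifts from humans to assistants to autonomous agents only as the confidence to do so safely is earned.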
However, this autonomy is a double-edged sword. Granting AI agents direct control over critical network functions massively expands the attack surface. A compromised AI could be weaponized to manipulate traffic routing, disable essential services, or expose vast troves of sensitive customer data. The risks extend beyond malicious attacks. Ungoverned AI systems are susceptible to model drift, where performance degrades over time, or inherent bias that can lead to discriminatory service delivery. For an industry that forms the backbone of modern society—supporting everything from emergency services to financial systems—such vulnerabilities are unacceptable. CSPs must therefore "innovate their engineering foundations and build the confidence to safely shift decision-making from humans, to assistants, and eventually to autonomous agentic systems," Lupo added.
Forging an Industry-Wide Blueprint for Trust
Recognizing that isolated efforts are insufficient, TM Forum created the AI-Native Blueprint Project, the industry's first structured path for this transition. The goal is to move CSPs out of siloed labs and into production environments, enabling the shift from small-scale experimentation to trustworthy and responsible AI at scale. The blueprint is built on three foundational pillars designed to address the most pressing challenges.
First is Data Products Lifecycle Management, which seeks to transform data from siloed projects into discoverable, governed products that AI agents can access reliably and without human intervention. The second, Model as a Service (MODaaS), aims to define how CSPs can source, operate, and scale diverse AI models across cloud, on-premise, and edge environments with consistent metrics and controls.
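One way to picture the first pillar is as a descriptor that travels with the data: ownership, schema version, freshness guarantees, and permitted purposes are declared up front so an AI agent can discover and validate a data product without a human in the loop. The sketch below is purely illustrative and uses invented field names; it is an assumption about what such a contract could look like, not TM Forum's actual data product or MODaaS specification.

```python
# Illustrative descriptor for a governed data product that an AI agent could
# discover and validate before use. Field names are hypothetical, not taken
# from TM Forum's Data Products Lifecycle Management workstream.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class DataProduct:
    name: str
    owner: str                      # accountable team, not an individual
    schema_version: str
    last_refreshed: datetime
    freshness_sla: timedelta        # how stale the data may be
    allowed_purposes: list[str] = field(default_factory=list)

def agent_can_consume(product: DataProduct, purpose: str) -> bool:
    """An agent checks freshness and purpose limits before consuming data."""
    fresh = datetime.now(timezone.utc) - product.last_refreshed <= product.freshness_sla
    return fresh and purpose in product.allowed_purposes

cell_kpis = DataProduct(
    name="ran.cell_kpis",
    owner="network-analytics",
    schema_version="2.3",
    last_refreshed=datetime.now(timezone.utc) - timedelta(minutes=10),
    freshness_sla=timedelta(hours=1),
    allowed_purposes=["fault_prediction", "capacity_planning"],
)

print(agent_can_consume(cell_kpis, "fault_prediction"))    # True
print(agent_can_consume(cell_kpis, "customer_profiling"))  # False: purpose not allowed
```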
The third pillar, Security & Governance, is arguably the most critical for building long-term trust. This workstream, co-chaired by SAS, is tasked with establishing security patterns for interactions between AI agents and setting the foundational principles for responsible AI governance. By standardizing these elements, the project aims to create an interoperable and secure ecosystem where CSPs can deploy AI with confidence, knowing it aligns with both regulatory requirements and ethical obligations.
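What a "security pattern for interactions between AI agents" might look like in practice is a policy check that every agent-to-agent request must pass before any network-affecting action is taken, with each decision logged for audit. The minimal sketch below is a hypothetical illustration, assuming invented agent identities, scopes, and rules; it is not the workstream's actual design.

```python
# Minimal, hypothetical illustration of an authorization check between AI
# agents. Identities, scopes, and rules are invented for this example and do
# not reflect the Security & Governance workstream's actual patterns.
from dataclasses import dataclass

@dataclass
class AgentRequest:
    caller: str          # identity of the requesting agent
    action: str          # e.g. "reroute_traffic"
    target: str          # e.g. "cell-0421"

# Policy: which actions each agent identity is allowed to request.
POLICY: dict[str, set[str]] = {
    "fault-remediation-agent": {"restart_baseband", "reroute_traffic"},
    "capacity-planning-agent": {"read_kpis"},
}

AUDIT_LOG: list[str] = []

def authorize(req: AgentRequest) -> bool:
    """Allow only actions within the caller's granted scope, and record
    every decision so it can be audited later."""
    allowed = req.action in POLICY.get(req.caller, set())
    AUDIT_LOG.append(
        f"{'ALLOW' if allowed else 'DENY'} {req.caller} -> {req.action} on {req.target}"
    )
    return allowed

print(authorize(AgentRequest("fault-remediation-agent", "reroute_traffic", "cell-0421")))  # True
print(authorize(AgentRequest("capacity-planning-agent", "reroute_traffic", "cell-0421")))  # False
print(AUDIT_LOG)
```

Standardizing checks like this across vendors is what would make an ecosystem of interoperating agents auditable rather than a patchwork of bespoke integrations.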
Architecting Governance with Proven Expertise
SAS's role as co-chair of the Security & Governance Workstream is a strategic move that leverages the company's deep background in analytics, risk management, and data ethics. The firm is not new to this challenge, having long provided solutions that help organizations in highly regulated industries implement responsible AI. A recent SAS survey underscores the growing corporate awareness of this need, revealing that 57% of organizations plan to increase their investments in responsible AI, and linking strong governance to superior business performance.
To guide this process, SAS brings frameworks like its AI Governance Map, a tool designed to help organizations assess their governance maturity and create a roadmap for managing risk, transparency, and compliance in AI deployments. This expertise is crucial for turning abstract principles into practical, operational controls. "An inability to trust AI's security foundation can be the downfall of any number of promising AI projects," said Reggie Townsend, Vice President of the SAS Data Ethics Practice. "Our collaboration with TM Forum ensures that AI adoption delivers measurable business value without compromising integrity." This focus on a robust security foundation is essential for preventing the kind of systemic failures that could erode public trust in both AI and the telecom services that depend on it.
From Framework to Real-World Value
The ultimate measure of the AI-Native Blueprint's success will be its real-world impact. The initiative is not merely a theoretical exercise; it is directly tied to enabling tangible business outcomes. SAS points to its existing work with telecom clients as a proof point for what well-governed AI can achieve. By using automated decision-making, CSPs have reportedly optimized network investments, reducing rollout times by 40% and achieving payback in as little as seven months. In other use cases, AI agents have improved call center productivity, cutting complaint handling time by up to 40% while increasing the volume of cases handled.
These results demonstrate that responsible AI is not a barrier to innovation but a critical enabler of sustainable growth and operational excellence. The first version of the AI-Native Blueprint, incorporating the findings from the Security & Governance workstream, is set to be unveiled at TM Forum's Digital Transformation World Ignite event in Copenhagen in June 2025. This will provide the global telecom industry with its first comprehensive look at a standardized, secure path forward, helping to ensure the next wave of AI-driven innovation is built on a foundation of trust.