Arrcus and Lightstorm Forge AI Network to Power Asia-Pacific's Boom

📊 Key Data
  • $192 billion: The global AI in networks market is projected to reach this value by 2034, up from $11.5 billion in 2024, with Asia-Pacific leading the growth.
  • $110 billion: AI and generative AI investments in the Asia-Pacific region are expected to hit this figure by 2028.
  • Minutes vs. weeks: Lightstorm's Polarin NaaS platform enables high-bandwidth data center interconnects to be established in minutes rather than weeks.
🎯 Expert Consensus

Experts agree that the Arrcus-Lightstorm partnership addresses a critical need in the AI infrastructure market by providing a purpose-built, intelligent network solution that can handle the unique demands of distributed AI workloads, particularly in the Asia-Pacific region.


By Timothy Bell

BARCELONA, Spain – March 03, 2026 – As the artificial intelligence arms race intensifies across Asia, the focus is shifting from raw computing power to the intricate web that connects it. Addressing this critical need, networking software leader Arrcus and Asia-Pacific digital infrastructure provider Lightstorm have announced a strategic partnership to deliver a purpose-built network fabric designed specifically for the demanding workloads of distributed AI.

The collaboration, unveiled here, combines Arrcus’s advanced routing software with Lightstorm’s expansive Polarin Network-as-a-Service (NaaS) platform. The integrated solution aims to create a more intelligent, automated, and responsive network layer that can handle the unique demands of AI training and inference, positioning the pair to capture a significant share of the region's burgeoning AI infrastructure market.

The New Bottleneck: Why AI Needs a Smarter Network

The explosive growth of AI has created a new class of applications that place unprecedented strain on traditional network architectures. Large-scale AI training models often involve thousands of GPUs working in concert across multiple data centers, requiring a network that can provide lossless, low-latency, and high-bandwidth connectivity to prevent costly processing units from sitting idle. Similarly, real-time AI inference, which powers everything from recommendation engines to autonomous systems, demands consistent, ultra-low latency to deliver immediate responses.

Research indicates that the global market for AI in networks is projected to skyrocket from an estimated $11.5 billion in 2024 to over $192 billion by 2034, with the Asia-Pacific region leading this growth. This surge is fueled by the realization that conventional networking, even at high speeds, is often the bottleneck that throttles AI performance. The challenge is no longer just about moving data quickly, but about moving it intelligently, with an awareness of the workload's specific needs.

"AI infrastructure requires the network to behave like part of the compute fabric," said Amajit Gupta, Group CEO & MD of Lightstorm, in the official announcement. This sentiment reflects a fundamental shift in infrastructure design, where the network must be as dynamic and programmable as the compute and storage it supports. The partnership aims to deliver on this vision by creating a cohesive system that can dynamically allocate bandwidth, optimize data paths, and ensure the deterministic performance that distributed AI clusters crave.

A Two-Part Solution for a Complex Problem

The joint solution tackles this complexity by integrating two powerful, complementary platforms. On one side is Arrcus, a San Jose-based company known for its disaggregated networking software. Its Arrcus Connected Edge (ACE) platform, and specifically its ACE-AI™ offering, provides the 'brains' of the operation. This software-defined approach enables intelligent, policy-aware routing that can adapt to the fluctuating demands of AI workloads. It leverages technologies like RDMA over Converged Ethernet version 2 (RoCEv2) and Priority Flow Control (PFC) to create a lossless environment, which is critical for preventing packet drops that can derail massive, parallel processing tasks.

On the other side is Lightstorm's Polarin, a cutting-edge Network-as-a-Service (NaaS) platform built atop an extensive fiber network spanning India and key APAC markets. Polarin has already made waves by automating network provisioning, enabling customers to establish high-bandwidth data center interconnects in minutes rather than weeks. Its API-driven architecture allows for the kind of programmability that the Arrcus-Lightstorm solution promises, letting customers orchestrate connectivity based on business policies and real-time workload requirements.
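An API-driven NaaS workflow of the kind described above typically reduces an interconnect order to a single structured request. The sketch below shows what such a request body might look like; the field names and billing model are assumptions for illustration, as the article does not document Polarin's actual API surface.

```python
import json

def build_interconnect_request(a_end: str, z_end: str, gbps: int) -> str:
    """Build a JSON order body for a point-to-point data center interconnect.

    Hypothetical request shape -- the real Polarin API is not described
    in this article.
    """
    if gbps <= 0:
        raise ValueError("bandwidth must be positive")
    return json.dumps({
        "a_end": a_end,             # data-center identifier at one end
        "z_end": z_end,             # identifier at the other end
        "bandwidth_gbps": gbps,
        "billing": "hourly",        # elastic, consumption-based NaaS model
    })

body = build_interconnect_request("SIN1", "BOM2", 100)
```

Because the underlying fiber and optical capacity are pre-provisioned, a platform like this can activate the service by software configuration alone, which is what collapses turn-up times from weeks of physical cross-connect work to minutes.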

"Distributed AI workloads—from real-time inference to large-scale training—demand intelligent, programmable networking," added Shekar Ayyar, Chairman & CEO of Arrcus. "Our partnership with Lightstorm enables AI operators across Asia-Pacific to optimize network performance and improve infrastructure efficiency."

By weaving Arrcus's routing intelligence into the fabric of the Polarin NaaS platform, the partners are offering a unified solution that promises to accelerate model training times, improve inference response speed, and simplify the expansion of AI clusters across multiple sites.

Navigating APAC's Data Sovereignty Maze

Beyond pure performance, the partnership strategically addresses one of the most significant hurdles for AI deployment in the Asia-Pacific region: data sovereignty. The APAC regulatory landscape is a complex patchwork of national laws and guidelines. Countries like China, India, and Indonesia are implementing increasingly strict rules governing how and where data is stored and processed, often requiring it to remain within national borders.

This presents a major challenge for organizations looking to build distributed AI systems that span the region. A model might be trained on data in one country but used for inference in another, all while needing to comply with different legal frameworks. The Arrcus-Lightstorm solution is designed to navigate this maze through its policy-driven architecture. By using API-driven orchestration, enterprises can program the network to enforce rules that keep sensitive data within a specific geographic or legal boundary. This allows them to build high-performance, cross-border AI infrastructure that doesn't compromise on compliance, turning a potential obstacle into a competitive advantage.
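A residency guardrail of this kind can be expressed as a simple policy check evaluated before any path is programmed into the network. The classification labels, jurisdiction codes, and policy table below are illustrative assumptions, not the partners' actual policy schema.

```python
# Map each data classification to the jurisdictions its traffic may transit.
RESIDENCY_POLICY = {
    "in-personal":  {"IN"},                    # must stay within India
    "id-financial": {"ID"},                    # must stay within Indonesia
    "public":       {"IN", "ID", "SG", "JP"},  # free to cross these borders
}

def path_allowed(data_class: str, transit_countries: list[str]) -> bool:
    """Reject any candidate path that leaves the permitted jurisdictions."""
    allowed = RESIDENCY_POLICY.get(data_class, set())
    return all(c in allowed for c in transit_countries)

# A Mumbai-to-Chennai path stays in-country, so Indian personal data may use it:
path_allowed("in-personal", ["IN"])        # True
# A route hairpinning through Singapore leaves India and is rejected:
path_allowed("in-personal", ["IN", "SG"])  # False
```

Under an orchestration model like this, compliance stops being a manual audit step and becomes a constraint the network enforces automatically on every provisioning request.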

Reshaping the Competitive Landscape

With AI and generative AI investments in the Asia-Pacific region projected to hit $110 billion by 2028, the strategic timing of this partnership is clear. Arrcus and Lightstorm are positioning themselves not just as suppliers but as enablers of this growth, offering a specialized solution in a market where general-purpose tools are proving inadequate. They face competition from established networking giants like Cisco and Juniper, as well as the formidable, vertically integrated ecosystem of NVIDIA, whose InfiniBand technology is a popular choice for high-performance computing.

However, the partnership's unique blend of intelligent, disaggregated software and an automated, expansive NaaS platform offers a compelling alternative. It promises the flexibility and cost-effectiveness of a software-defined approach combined with the reach and speed of a modern carrier. For enterprises and AI cloud providers in Asia-Pacific, this integrated, purpose-built fabric could significantly lower the barrier to entry for developing and deploying next-generation AI at scale. The joint solution is now available for deployment across Lightstorm’s extensive network, signaling a new phase in the race to build the foundational infrastructure for the region's AI-powered future.
