Mirantis and VAST Data Partner to Boost AI Factory Efficiency
- Market Growth: The neocloud market is projected to surge from $16 billion in 2024 to $1.1 trillion by 2034.
- GPU Utilization: The partnership aims to maximize GPU efficiency, addressing a critical bottleneck in AI infrastructure.
Experts view this partnership as a strategic move to standardize AI infrastructure, reducing complexity and accelerating deployment for neoclouds and enterprises.
CAMPBELL, Calif. – February 25, 2026 – In a move aimed at streamlining the complex world of artificial intelligence infrastructure, Kubernetes specialist Mirantis announced it has joined VAST Data’s Cosmos Partner Program as an inaugural Technology Partner. The collaboration is set to create a standardized and repeatable blueprint for deploying high-performance AI systems, tackling one of the most significant economic challenges in the industry: ensuring expensive Graphics Processing Units (GPUs) are not left idle.
The partnership will integrate VAST Data’s AI Operating System with the Mirantis k0rdent AI ecosystem. This combination promises to help a rapidly growing class of specialized cloud providers, known as ‘neoclouds,’ orchestrate and scale their AI factories more efficiently, reducing integration headaches and accelerating the delivery of AI services.
The Neocloud Rush and the GPU Bottleneck
The AI revolution has ignited a gold rush for computational power, giving rise to neoclouds—a new breed of cloud provider purpose-built for the extreme demands of AI and high-performance computing. This market is experiencing explosive growth, with some analysts projecting it to surge from just over $16 billion in 2024 to more than $1.1 trillion by 2034. These providers specialize in offering GPU-as-a-Service, making powerful AI hardware accessible to enterprises without the massive upfront capital investment.
However, building and operating these AI factories is fraught with challenges. The most critical is maximizing the utilization of GPUs, which are not only costly but often in short supply. A high-end GPU can be rendered ineffective if it is constantly left waiting for data, a common problem known as a data bottleneck. Eliminating such bottlenecks has become a defining factor in the performance, and the economics, of modern AI infrastructure.
“Neoclouds are under constant pressure to get more from every GPU hour,” said Kevin Kamel, vice president of product management at Mirantis, in the official announcement. This pressure is the driving force behind the new partnership, which seeks to solve the data throughput problem at an architectural level.
Beyond Bespoke: The Rise of Standardized AI Stacks
For years, building large-scale AI infrastructure was an artisanal process, requiring bespoke integrations of hardware and software from multiple vendors. This approach is slow, expensive, and difficult to scale. In response, the industry is rapidly shifting towards standardized, pre-validated building blocks—a trend heavily influenced by ecosystem leaders like NVIDIA.
NVIDIA has been championing the concept of the “AI Factory” through its reference architectures, which provide blueprints for how compute, networking, and storage should be composed for large-scale GPU environments. The collaboration between Mirantis and VAST Data aligns directly with this movement.
“AI infrastructure leaders are moving toward validated building blocks instead of bespoke stacks,” noted John Mao, vice president of Global Technology Alliances at VAST Data. “In joining VAST’s Cosmos Community, Mirantis brings orchestration expertise that helps operators operationalize the VAST AI Operating System in real-world AI factories – accelerating deployment, reducing integration burden, and improving the path to performance at scale.”
By creating a standardized integration path, the two companies aim to provide a commercially supported, enterprise-grade solution that simplifies one of the most complex parts of the AI puzzle.
Orchestration Meets Data: A Technical Synergy
The collaboration combines two powerful, complementary technologies. Mirantis k0rdent AI provides a flexible and scalable framework for engineering and operating Kubernetes multi-cluster environments. It acts as the orchestration layer, managing the compute resources where AI models are trained and run. Its open and composable design allows it to manage infrastructure across different hardware vendors, providing crucial flexibility for neoclouds.
On the other side, the VAST AI Operating System unifies storage, database, and data processing functions into a single, high-performance platform designed to feed data-hungry GPUs. The system is engineered to eliminate the traditional separation between data services and compute, running directly on GPU-accelerated servers to minimize latency and maximize throughput. This unified approach supports the entire AI workflow, from data ingestion and training to high-speed inference.
By integrating these two platforms, the partnership creates a cohesive stack where the orchestration of workloads (Mirantis) is tightly coupled with the high-performance data delivery system (VAST). This synergy is designed to allow neocloud operators to scale their deployments using repeatable patterns, moving away from one-off integration projects and focusing instead on delivering AI services.
Accelerating the Path to AI Services
The ultimate goal of this partnership is to accelerate the time-to-market for AI infrastructure and the services built upon it. For neoclouds and enterprises building their own AI capabilities, the benefits are clear: reduced complexity, faster deployment, and a better return on their substantial hardware investments.
The joint solution targets three key outcomes: maximizing GPU utilization by eliminating storage bottlenecks, enabling scalable deployments through repeatable patterns, and speeding up service delivery through Kubernetes-native automation. “By working with VAST Data through the Cosmos Community, we’re building a repeatable, standards-based path to bring high-performance data services into the k0rdent AI ecosystem, helping operators scale faster, simplify operations, and focus on outcomes instead of one-off integration work,” Kamel explained.
This move reflects a maturing AI market where operational efficiency and speed are becoming just as important as raw performance. As more organizations seek to deploy AI at scale, the demand for pre-integrated, validated infrastructure solutions that remove complexity from the equation is only set to grow.
