AI vs. AI: MEXT Tackles Soaring Memory Costs With New Software

📊 Key Data
  • 50% reduction in infrastructure costs with Predictive Memory™
  • 2-4x expansion in usable memory capacity without hardware changes
  • $3.99 per gigabyte per year subscription pricing for memory scaling
🎯 Expert Consensus

Experts view MEXT’s Predictive Memory™ as a critical software solution to the growing DRAM scarcity crisis, offering immediate cost savings and performance improvements without requiring hardware upgrades.


SANTA CLARA, CA – April 07, 2026 – In an ironic twist, the artificial intelligence boom that is straining global computing infrastructure may have also spawned its own solution. As AI workloads drive an insatiable demand for memory, a three-year-old startup, MEXT, has emerged from stealth with a software-only solution, Predictive Memory™, that uses AI to combat the very infrastructure crisis AI has created.

The company announced today a technology that promises to cut infrastructure costs by up to 50% and expand usable memory capacity by 2-4x without requiring a single hardware change. The announcement comes as data centers grapple with DRAM (Dynamic Random-Access Memory) becoming the scarcest and most expensive component in a server, a bottleneck that threatens to slow the pace of innovation.

“Memory is an increasingly critical component of modern compute infrastructure,” said Robert Hormuth, Corporate Vice President of Architecture and Strategy at AMD, in a statement. “As AI and data-intensive workloads continue to grow, finding new approaches to memory scalability is becoming an industry imperative. We have been working closely with MEXT and see their technology as a strong answer to this imperative.”

The Soaring Price of Progress

The root of the problem is a half-century old. While processors and storage have undergone revolutionary changes, the fundamental architecture of DRAM has remained largely static since its introduction in 1970. This stagnation has become a critical issue as the memory requirements for large language models (LLMs) and other data-intensive applications have exploded.

“MEXT was founded three years ago around a simple conviction: DRAM costs would become the defining constraint on computing, already accounting for half the cost of a server,” said Gary Smerdon, Founder and CEO of MEXT. He noted that with prices surging 3.5x in the past two quarters alone, the industry is hitting a breaking point.

This “memory crisis” is compounded by a paradox of inefficiency. While companies spend billions on stocking servers with expensive DRAM, studies from major hyperscalers like Meta, Microsoft, and Google have highlighted that this crucial resource is often severely underutilized. In many enterprise environments, as much as 50% of DRAM sits idle, holding inactive or “cold” data. This forces organizations into a cycle of overprovisioning, buying more memory than they actively use just to handle peak loads and avoid system crashes.

Software to Solve a Hardware Problem

MEXT’s Predictive Memory™ attacks this problem by fundamentally changing the relationship between memory and storage. The software transparently brings low-cost, high-capacity flash storage into the memory domain, creating a new, intelligent tier.

The process is elegantly simple in concept but complex in execution. The software first identifies memory pages that are not in active use and offloads them to a flash drive, which can be 50 times cheaper than DRAM. The core innovation lies in its patent-pending AI engine. Much like a language model predicts the next word in a sentence, MEXT’s engine analyzes an application's memory access patterns to predict which offloaded data will be needed next. It then proactively moves those specific memory pages from flash back into DRAM before the application even requests them.

This predictive pre-fetching is the key to overcoming the inherent latency of flash storage. By ensuring data is already in high-speed DRAM when called upon, the application experiences little to no performance impact. The result, MEXT claims, is a new price-performance tier that delivers the speed of DRAM with the economics and massive capacity of flash. The entire process, from training to inference, runs on a single CPU core and requires no specialized GPU hardware. Crucially for IT departments, the solution is designed for immediate impact, installing in under five minutes without any changes to the existing operating system, applications, or hardware.
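The tiering-plus-prefetch mechanism described above can be sketched as a toy simulation. This is an illustrative model, not MEXT's implementation: the `TieredMemory` class, its first-order Markov predictor, and all parameters are assumptions chosen for demonstration.

```python
from collections import OrderedDict, defaultdict

class TieredMemory:
    """Toy two-tier memory: a small, fast DRAM tier backed by large flash.

    Illustrative only. The predictor is a first-order Markov model over
    page accesses: after page A is touched, the most frequent successor
    of A is prefetched from flash into DRAM before it is requested.
    """

    def __init__(self, dram_pages, prefetch=True):
        self.dram_pages = dram_pages
        self.prefetch = prefetch
        self.dram = OrderedDict()                     # page -> True, LRU order
        self.flash = set()                            # pages demoted to flash
        self.successors = defaultdict(lambda: defaultdict(int))
        self.prev = None
        self.dram_hits = 0                            # served at DRAM speed
        self.demand_misses = 0                        # synchronous flash reads

    def _promote(self, page):
        """Bring a page into DRAM, demoting the coldest page to flash."""
        if page in self.dram:
            self.dram.move_to_end(page)
            return
        if len(self.dram) >= self.dram_pages:
            cold, _ = self.dram.popitem(last=False)   # evict the LRU page
            self.flash.add(cold)
        self.flash.discard(page)
        self.dram[page] = True

    def access(self, page):
        if page in self.dram:
            self.dram.move_to_end(page)
            self.dram_hits += 1
        else:
            self.demand_misses += 1                   # slow path: wait on flash
            self._promote(page)
        if self.prev is not None:                     # learn the transition
            self.successors[self.prev][page] += 1
        nxt = self.successors[page]
        if self.prefetch and nxt:
            predicted = max(nxt, key=nxt.get)         # most likely next page
            if predicted in self.flash:
                self._promote(predicted)              # hide the flash latency
        self.prev = page
```

On a cyclic working set of six pages with only four DRAM slots, plain LRU thrashes on every access, while the prefetcher settles into all-DRAM hits after a single learning pass. This captures the article's point in miniature: the win comes from moving data *before* it is demanded, not from making flash faster.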

Redefining the Economics of Computing

By decoupling memory capacity from expensive DRAM, MEXT is positioning itself as a critical tool for CIOs and CFOs struggling to balance innovation with budget constraints. The technology offers a compelling alternative to simply buying more hardware, a strategy that is becoming financially unsustainable.

While emerging hardware standards like Compute Express Link (CXL) promise a more flexible, composable future for data center memory, they require next-generation hardware that most enterprises have yet to adopt. MEXT’s software-only approach provides an immediate solution for the vast installed base of existing servers, both on-premises and in the cloud.

This approach effectively turns a hardware limitation into a software optimization problem. The company’s subscription model, priced at $3.99 per gigabyte annually, allows organizations to scale their memory capacity in a more granular and cost-effective manner. For IT leaders, this shifts a significant portion of capital expenditure on hardware to a more predictable operating expense for software, directly impacting the total cost of ownership (TCO).
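Whether the subscription undercuts buying more DRAM outright depends on the prevailing DRAM street price and how long the hardware stays in service. A minimal break-even sketch, where the DRAM price is an assumed figure for illustration and only the $3.99 per gigabyte per year subscription rate comes from the announcement:

```python
def breakeven_years(dram_price_per_gb, subscription_per_gb_year=3.99):
    """Years of subscription after which buying DRAM outright would have
    been cheaper, ignoring power, DIMM slot limits, and DRAM scarcity."""
    return dram_price_per_gb / subscription_per_gb_year

# At an assumed $10/GB DRAM street price, the subscription costs less
# until roughly the 2.5-year mark.
years = breakeven_years(10.0)
```

This simple ratio ignores two factors the article itself raises, both of which favor the software side: servers may have no free DIMM slots to populate, and DRAM prices have surged 3.5x in two quarters, which raises the break-even horizon accordingly.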

From Hollywood Render Farms to Chip Design

The technology is already proving its worth in some of the world's most demanding computing environments. In the media and entertainment industry, a premier Hollywood studio is using Predictive Memory™ to accelerate animation and special effects rendering. A company whitepaper details how DreamWorks' open-source MoonRay renderer, running on a system with MEXT, achieved near-identical performance with half the physical DRAM compared to a control system. This allows artists to work on larger, more complex scenes without waiting for hardware upgrades.

Further tests with SideFX Houdini, another industry-standard tool, showed MEXT delivering up to a 4.4X performance improvement on memory-constrained systems by preventing the system from resorting to slow, traditional disk swapping. For render-heavy workflows, the ability to avoid Out-of-Memory (OOM) errors is a game-changer, turning previously impossible jobs into manageable tasks.

This impact extends to the semiconductor industry, where one of the world's leading chip manufacturers is using the software to accelerate complex Electronic Design Automation (EDA) workloads. By expanding the effective memory of their systems, engineers can run larger, more intricate simulations, shortening chip design cycles. The gaming industry is also leveraging the technology to build richer, more immersive worlds without a corresponding explosion in infrastructure spending. Available today, the software lets organizations validate these performance claims in their own environments through a proof-of-concept deployable in minutes, a bold promise for a technology aiming to solve one of computing's most persistent problems.

