AI's New Frontier: SK hynix Backs Memory-Centric Chipmaker Semidynamics
- €45 million in previous non-dilutive funding for Semidynamics from European and Spanish innovation programs
- 3nm process node achieved in Semidynamics' first silicon tape-out
- Memory-centric architecture designed to reduce 'cost per token' in AI workloads
A growing industry consensus holds that the future of AI hardware lies in optimizing memory architecture to overcome the 'memory wall'; this partnership positions Semidynamics and SK hynix to attack that bottleneck directly.
Semidynamics and SK hynix Bet on Memory to Solve AI's Cost Crisis
BARCELONA, Spain – April 08, 2026 – In a move that signals a pivotal shift in the artificial intelligence hardware race, memory manufacturing giant SK hynix has made a strategic investment in Barcelona-based Semidynamics. The partnership underscores a growing conviction within the industry: the future economics of AI will be defined not by raw computational power, but by the efficiency of memory architecture.
The investment unites one of the world's leading memory producers with an advanced computing company whose entire philosophy is built around solving the data bottlenecks that plague modern AI systems. As large language models (LLMs) and agentic AI become more complex, their performance is increasingly constrained by the time and energy spent moving data, a challenge known as the "memory wall." This collaboration aims to dismantle that wall, focusing on the metric that matters most to data centers and enterprises deploying AI at scale: the cost per token.
Tackling the 'Memory Wall'
For years, the pursuit of AI dominance has been a race for faster processors and more compute cores. However, a fundamental limitation has emerged. Processors can often sit idle, waiting for data to arrive from memory. This disparity between processing speed and data access speed—the memory wall—creates a significant performance bottleneck, particularly for the memory-intensive workloads of next-generation AI.
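The imbalance behind the memory wall can be made concrete with a simple roofline-style estimate. The figures below are illustrative assumptions, not specifications of any Semidynamics or SK hynix product: during LLM decoding, generating each token requires streaming essentially all model weights from memory, so arithmetic intensity is roughly one floating-point operation per byte, far too low to keep a modern accelerator's compute units busy.

```python
# Roofline-style estimate of why LLM decoding is memory-bound.
# All numbers are illustrative assumptions, not vendor specifications.

def attainable_flops(peak_flops: float, bandwidth_bps: float,
                     intensity_flops_per_byte: float) -> float:
    """Attainable throughput is capped by either compute or memory traffic."""
    return min(peak_flops, intensity_flops_per_byte * bandwidth_bps)

PEAK_FLOPS = 200e12   # 200 TFLOPS of peak compute (assumed)
BANDWIDTH = 1e12      # 1 TB/s of memory bandwidth (assumed)
INTENSITY = 1.0       # ~2 FLOPs per 2-byte weight during decode -> ~1 FLOP/byte

achieved = attainable_flops(PEAK_FLOPS, BANDWIDTH, INTENSITY)
utilization = achieved / PEAK_FLOPS

print(f"attainable: {achieved / 1e12:.0f} TFLOPS "
      f"({utilization:.1%} of peak) -- compute units mostly wait on memory")
```

Under these assumed numbers the chip delivers only 0.5% of its peak compute; adding more cores changes nothing, while improving the memory path improves everything.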
These workloads, which include multi-turn conversational agents and complex reasoning systems, rely on maintaining vast amounts of context, such as the history of a conversation or intermediate calculations known as a KV-cache. As the context grows, so does the demand on memory capacity and bandwidth. This directly impacts the "cost per token," a key economic indicator measuring the expense of processing each unit of information. Inefficient data movement leads to higher latency, lower user density per server, and ultimately, a higher cost to generate an answer or perform a task.
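The pressure that long contexts put on memory, and how that feeds into cost per token, can be sketched with back-of-envelope arithmetic. The model shape, server memory budget, and context length below are hypothetical assumptions chosen for illustration, not figures from Semidynamics or SK hynix:

```python
# Back-of-envelope KV-cache sizing for a hypothetical 70B-class model.
# All dimensions and capacities are assumptions for illustration only.

def kv_bytes_per_token(layers: int, kv_heads: int, head_dim: int,
                       dtype_bytes: int) -> int:
    """Each token stores one key and one value vector per layer per KV head."""
    return 2 * layers * kv_heads * head_dim * dtype_bytes

LAYERS, KV_HEADS, HEAD_DIM, FP16 = 80, 8, 128, 2   # assumed model shape
per_token = kv_bytes_per_token(LAYERS, KV_HEADS, HEAD_DIM, FP16)

CONTEXT = 32_768                   # tokens of conversation history per user
per_user = per_token * CONTEXT     # KV-cache footprint per active user

SERVER_MEM = 640e9                 # bytes of memory per server (assumed)
WEIGHTS = 70e9 * FP16              # fp16 weights for a 70B-parameter model
users = int((SERVER_MEM - WEIGHTS) // per_user)

print(f"{per_token / 1024:.0f} KiB per token, "
      f"{per_user / 2**30:.0f} GiB per 32k-token user, "
      f"{users} concurrent users per server")
```

In this sketch each long-context user ties up about 10 GiB of KV-cache, so at a fixed context length the number of users a server can hold, and hence its cost per token, scales almost directly with memory capacity.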
"SK hynix's investment is a direct reflection of where AI infrastructure is heading: systems where memory architecture is as strategically important as compute," said Roger Espasa, Founder and CEO of Semidynamics. "We built Semidynamics around that thesis, and this partnership strengthens our position as we bring our inference platform to market at a moment when the industry has recognized that token economics are a memory problem as much as a compute problem."
A New Architecture for AI Inference
Unlike many competitors who retrofit existing compute-centric designs to better handle memory, Semidynamics has engineered its architecture from first principles around this challenge. The company has developed a proprietary implementation of the open-standard RISC-V instruction set, designing it specifically to overcome data movement constraints.
At the heart of its design is the Gazzillion® memory subsystem, a proprietary technology engineered for latency tolerance. Instead of simply trying to brute-force data through wider pipes, Gazzillion® is designed to keep the processor's compute units productive even during the long waits associated with memory access. This design philosophy is embedded throughout the chip, from the core and tensor units to the memory subsystem itself, enabling the system to handle massive datasets with greater efficiency.
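Gazzillion's internals are not public, but the general principle of latency tolerance can be quantified with Little's law: to sustain a target bandwidth despite a fixed memory latency, a core must keep roughly bandwidth × latency bytes in flight at once. The bandwidth, latency, and cache-line figures below are assumptions for illustration:

```python
# Little's law estimate of the in-flight requests needed to hide memory latency.
# Bandwidth, latency, and cache-line size are illustrative assumptions.
import math

def outstanding_lines(bandwidth_bps: float, latency_s: float,
                      line_bytes: int) -> int:
    """In-flight bytes = bandwidth * latency; divide by the line size."""
    return math.ceil(bandwidth_bps * latency_s / line_bytes)

BANDWIDTH = 1e12    # 1 TB/s target memory bandwidth (assumed)
LATENCY = 100e-9    # 100 ns average memory latency (assumed)
LINE = 64           # 64-byte cache lines

needed = outstanding_lines(BANDWIDTH, LATENCY, LINE)
print(f"{needed} cache-line requests must be in flight to saturate the link")
```

A conventional core that tracks only a few dozen outstanding misses stalls long before reaching that figure, which is why tolerating latency with many in-flight requests matters more than simply widening the memory pipes.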
This memory-centric approach allows Semidynamics' processors to support several times the memory capacity of conventional AI systems. That enables the deployment of larger, more capable models, the maintenance of extensive KV-caches for long-context conversations, and ultimately, the ability to serve more users per rack. This translates directly to a lower cost per token.
The strategic alignment was echoed by SK hynix. "AI workloads are fundamentally memory-bound problems, and the industry has been underinvesting in architecture-level solutions," stated Heejin Chung, SVP, Head of Venture Investment at SK hynix America. "Semidynamics is one of the few companies that has built from first principles around this constraint."
From Design to Silicon: A European Milestone
The investment comes on the heels of a major technical achievement for the Barcelona-based firm. Semidynamics recently completed its first silicon tape-out on TSMC's cutting-edge 3nm process node. This milestone is not only a validation of the company's complex design but also a significant accomplishment for a European semiconductor company, placing it at the forefront of global chip innovation.
Achieving a 3nm tape-out demonstrates a high level of engineering maturity and positions Semidynamics to deliver chips with the performance and power efficiency necessary to compete at the highest level of the AI hardware market. This progress has been supported by €45 million in previous non-dilutive funding from various European and Spanish innovation programs, highlighting a regional commitment to fostering semiconductor sovereignty.
Furthermore, Semidynamics is pursuing an ambitious full-stack strategy. The company is not just designing chips but is building a vertically integrated platform encompassing boards and complete rack-level systems. This approach allows for deep co-optimization of hardware and software, offering customers a turnkey solution that simplifies deployment and maximizes performance for large-scale data center inference. The new funding from SK hynix will directly support future tape-outs and the buildout of this rack-level platform.
Reshaping the Competitive AI Hardware Landscape
The partnership between Semidynamics and SK hynix enters a highly competitive market dominated by giants like NVIDIA, with major players like AMD and Intel also vying for position. While these companies are increasingly integrating high-bandwidth memory (HBM) into their own powerful GPUs, the Semidynamics-SK hynix collaboration represents a different strategic approach: a deep, architectural alignment between a memory specialist and a memory-centric chip designer.
By co-optimizing Semidynamics' unique RISC-V architecture with SK hynix's next-generation memory technologies, the two companies aim to create a solution that is more than the sum of its parts. This synergy could provide a compelling alternative for data center operators and cloud providers struggling with the cost and complexity of deploying today's most demanding AI models. As the AI industry continues its relentless scaling, this focus on solving the memory problem at its core may well determine the next generation of leaders in AI infrastructure.