SonicEdge's Ultrasound Tech Aims to Reshape Audio in AI Wearables

At CES 2026, a new partnership to embed modulated ultrasound in chips could make our hearables and smart glasses smaller, smarter, and more private.

LAS VEGAS, NV – January 06, 2026 – A quiet revolution in personal audio is gaining volume at CES 2026. Israeli micro-acoustic innovator SonicEdge has announced a strategic partnership with an unnamed "leading global semiconductor manufacturer" to embed its modulated ultrasound speaker technology directly into next-generation audio chips. This move signals a significant industry bet that the future of sound in our smallest devices won't come from traditional speakers, but from silent, focused beams of ultrasound.

The partnership aims to create audio chipsets that can natively drive SonicEdge's miniature speakers, a development poised to accelerate the adoption of a technology that has long promised to solve the core challenges of audio in space-constrained, AI-powered devices like hearables and smart glasses. By moving the technology from a discrete component to an integrated part of the silicon, the collaboration seeks to tear down integration barriers for device manufacturers and establish a new benchmark for audio performance.

The Silent Revolution: What is Modulated Ultrasound?

For over a century, speakers have operated on the same basic principle: a diaphragm vibrates to create pressure waves in the air at audible frequencies. While technology has shrunk these components dramatically, they are still bound by physical limitations, especially when it comes to delivering rich sound from a tiny package. Modulated ultrasound throws this paradigm out the window.

Instead of creating audible sound directly, the technology uses ultrasonic waves—frequencies far above the range of human hearing—as a carrier. An audio signal is "modulated" onto this ultrasonic carrier wave. These waves are then emitted from a tiny, silicon-based transducer. The magic happens in the air itself; the non-linear properties of air cause the ultrasonic waves to self-demodulate, effectively recreating the original audible sound at a distance from the speaker.
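
To make the modulation step concrete, here is a minimal sketch in Python (with NumPy) of classic amplitude modulation onto an ultrasonic carrier. The 40 kHz carrier, 1 kHz test tone, and modulation depth are illustrative assumptions, not SonicEdge's actual parameters, and the air's non-linear demodulation is only noted in a comment rather than modeled.

```python
import numpy as np

fs = 192_000                               # sample rate high enough to represent a 40 kHz carrier
t = np.arange(0, 0.01, 1 / fs)             # 10 ms of signal

audio = np.sin(2 * np.pi * 1_000 * t)      # audible 1 kHz test tone (assumed input)
carrier = np.sin(2 * np.pi * 40_000 * t)   # hypothetical 40 kHz ultrasonic carrier

m = 0.8                                    # modulation depth (assumed)
emitted = (1 + m * audio) * carrier        # double-sideband AM: the audio becomes the carrier's envelope

# In a parametric speaker, the non-linear response of air (not modeled here)
# demodulates `emitted` as it propagates, recreating sound related to the
# audible envelope at a distance from the transducer.
```

In the classical parametric-array analysis, the demodulated sound is roughly proportional to the second time derivative of the squared envelope, which is why practical systems typically pre-process the audio before modulation rather than using plain AM as shown here.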

The primary benefit of this approach is extreme directionality. Unlike a conventional speaker that broadcasts sound in all directions, modulated ultrasound creates a narrow, highly focused beam of audio. This enables "personal sound zones," allowing a user to listen to music, take a call, or interact with an AI assistant in privacy, without disturbing those nearby and without needing to block their ears with an earbud.
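
A back-of-envelope calculation, not taken from SonicEdge's specifications, shows why an ultrasonic carrier makes such focusing practical: the beamwidth achievable from a small radiating aperture scales roughly with the ratio of wavelength to aperture size.

```latex
\theta \;\approx\; \frac{\lambda}{D},
\qquad
\lambda_{1\,\text{kHz}} = \frac{343\ \text{m/s}}{1\,000\ \text{Hz}} \approx 34\ \text{cm},
\qquad
\lambda_{40\,\text{kHz}} = \frac{343\ \text{m/s}}{40\,000\ \text{Hz}} \approx 8.6\ \text{mm}
```

A centimetre-scale emitter is a small fraction of a 34 cm audible wavelength, so direct audible output spreads in nearly all directions; at a hypothetical 40 kHz carrier the same emitter is comparable to or larger than the wavelength and can form a narrow beam, with the parametric effect in the air adding further directivity beyond this simple aperture argument.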

SonicEdge's innovation lies in its ability to produce these ultrasonic transducers on silicon, using the same reliable and scalable manufacturing processes behind modern computer chips. This allows for extreme miniaturization and continuous improvement. The company claims its technology is on a path to double its sound output from the same tiny footprint every two years, a Moore's Law-like progression for acoustics. This stands in contrast to traditional MEMS (Micro-Electro-Mechanical Systems) speakers, which, while also silicon-based and tiny, still generate sound via a vibrating membrane and lack the inherent directionality of ultrasound.

A Strategic Alliance to Reshape Hearables

The announcement of a partnership with a major, albeit anonymous, chipmaker is perhaps the most significant part of SonicEdge's CES reveal. It represents a powerful vote of confidence from a key gatekeeper in the consumer electronics industry. By embedding the modulated ultrasound intellectual property (IP) directly at the silicon level, the technology transitions from a niche component to a feature that could become standard in mainstream audio SoCs (System-on-Chips).

For Original Equipment Manufacturers (OEMs)—the companies that design and build our gadgets—this integration is a game-changer. It dramatically simplifies the complex engineering required to incorporate a novel audio system, reducing development costs, shrinking the bill of materials, and accelerating time-to-market. Instead of sourcing and integrating separate drivers and control electronics, product designers can work with a single chip that has native support built-in.

"Chip manufacturers today understand that AI-enabled hearables demand a fundamentally different acoustic architecture," said Dr. Moti Margalit, CEO and Co-founder of SonicEdge, in the company's official announcement. "Modulated ultrasound delivers the performance, miniaturization, and power efficiency these devices require, and we're seeing rapid industry movement to adopt this technology."

This strategic move mirrors past technological shifts where specialized functions, once handled by separate chips, become integrated into a central processor. It suggests the semiconductor industry sees modulated ultrasound not as a fringe experiment, but as a foundational technology for the next wave of personal computing.

Powering the Next Generation of AI Devices

The partnership arrives at a critical moment. The market for hearables and wearables is exploding, with projections showing the smart hearables segment growing at a CAGR of over 25% to potentially exceed $160 billion by the early 2030s. This growth isn't just about listening to music; it's driven by the integration of artificial intelligence, turning these devices into always-on assistants, health monitors, and real-time translators.
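
As a sanity check on what such a growth rate implies (the numbers below are generic compounding arithmetic, not figures from the cited projection), a market growing at 25% per year scales as:

```latex
V_T = V_0\,(1 + r)^{T}, \qquad (1 + 0.25)^{8} \approx 6.0
```

In other words, roughly a sixfold increase over eight years.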

However, this ambition is constrained by physics. Packing more AI processing power, advanced sensors, and all-day battery life into a device that fits in or on an ear is a monumental engineering challenge. SonicEdge's technology directly addresses several key pain points:
* Miniaturization and Form Factor: By enabling high-performance audio from a smaller source, the technology frees up precious internal volume for larger batteries or additional sensors.
* Power Efficiency: Directing sound only where it's needed is inherently more efficient than broadcasting it widely, a critical factor for battery-dependent wearables.
* Audio Privacy: For open-ear devices like smart glasses or certain earbuds, creating a private listening experience without sound leakage has been a major hurdle. Directional audio solves this elegantly.

SonicEdge is already looking beyond just the speaker. The company is showcasing its SonicTwin 100 (ST100) platform, which combines its modulated ultrasound drivers with advanced microphone technology. This integrated solution is designed to enable breakthrough performance in active noise cancellation (ANC) and provide the ultra-low-latency audio processing essential for seamless augmented reality and real-time AI interactions. Backed by a portfolio of over 25 patents, the company is positioning itself not just as a component supplier, but as the provider of the core acoustic engine for the AI-powered era.
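
The announcement does not describe how the ST100 implements noise cancellation, but the core idea behind most ANC systems can be sketched with a simple adaptive filter. The Python example below is a generic least-mean-squares (LMS) canceller with assumed signal names and parameters, not SonicEdge's algorithm; production ANC typically uses filtered-x LMS together with a model of the acoustic path from speaker to ear.

```python
import numpy as np

def lms_cancel(reference, primary, n_taps=32, mu=0.01):
    """Generic LMS noise canceller (illustrative sketch, not the ST100's method).

    `reference`: microphone signal correlated with the unwanted noise.
    `primary`:   signal to be cleaned (desired audio plus noise).
    Returns the residual after subtracting the filter's noise estimate.
    """
    w = np.zeros(n_taps)                  # adaptive filter weights
    buf = np.zeros(n_taps)                # recent reference samples, newest first
    residual = np.zeros(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        noise_estimate = w @ buf          # current estimate of the noise in `primary`
        e = primary[n] - noise_estimate   # what remains after cancellation
        w += mu * e * buf                 # LMS weight update
        residual[n] = e
    return residual

# Toy usage: a 200 Hz hum corrupts a 1 kHz tone; the canceller learns to remove the hum.
fs = 16_000
t = np.arange(0, 1.0, 1 / fs)
hum = np.sin(2 * np.pi * 200 * t)
tone = 0.3 * np.sin(2 * np.pi * 1_000 * t)
cleaned = lms_cancel(reference=hum, primary=tone + hum)
```

Whatever the specific algorithm, latency is the limiting factor: the shorter the path from microphone input to anti-noise output, the higher the frequencies that can be cancelled effectively, which is why tight integration with the audio silicon matters.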

Beyond the Earbud: A Glimpse into the Future of Audio

While the immediate focus is on hearables, the implications of mainstream, silicon-integrated modulated ultrasound extend far beyond the earbud. The ability to create discreet, personal sound bubbles could fundamentally change how we interact with technology in shared spaces.

Imagine smart glasses that provide navigational prompts or notifications that only the wearer can hear, without the social awkwardness of bone conduction vibrations or open-ear speakers. In the automotive world, each passenger could enjoy their own audio stream without headphones. In museums or retail stores, exhibits could deliver targeted audio information to individual visitors as they approach.

This technology represents a key enabler for the concept of ambient computing, where technology seamlessly integrates into our environment rather than demanding our focused attention through a screen. As major tech players like Apple, Google, and Samsung push further into this territory with their respective ecosystems of wearables and smart devices, the underlying components that enable new user experiences become immensely valuable.

By securing a partnership to embed its IP at the heart of the ecosystem—the silicon chip—SonicEdge has made a strategic move to ensure its technology is not just an option, but a fundamental building block for the devices of tomorrow. This CES announcement may prove to be the moment a technology once confined to niche applications started its journey toward becoming the new standard for personal sound.
