Beyond Parental Controls: TeenAegis Targets Big Tech's Risky Design
- 26% surge in online child sexual abuse crimes in England and Wales (2024)
- 56% rise in online enticement reports in the U.S. (first half of 2025)
- 6,341% increase in AI-related child exploitation reports (first half of 2025)
- $375 million penalty against Meta for failing to protect children (2026)
Experts increasingly view online child safety risks as systemic failures of platform design, with legal and regulatory actions gaining traction to hold Big Tech accountable.
SAN FRANCISCO, CA – March 31, 2026 – A new company is launching today with a bold strategy to protect children online: not by monitoring them, but by holding technology platforms accountable for systemic failures. TeenAegis, led by former Bank of America Vice Chairman Siobhan MacDermott, is rolling out an intelligence platform designed to expose the design flaws in social media, gaming, and AI products that enable child exploitation, grooming, and mental health harms.
The company is positioning itself in an entirely new category, moving beyond the saturated market of parental control apps. Rather than surveilling children, TeenAegis aims to supply data for lawsuits and regulatory action, effectively creating an accountability engine for Big Tech.
"Let's be clear—these are not edge cases. These are predictable outcomes of product design," said Siobhan MacDermott, Founder and CEO of TeenAegis, in a statement. "As AI accelerates interaction and scale, these risks don't diminish—they compound. TeenAegis exists to surface that reality and make it impossible to ignore."
A Digital World Risky by Design
The launch comes at a critical moment, as data reveals a deepening crisis in online child safety. The digital environment, often described by experts as "risky by design," is increasingly linked to severe harm. In 2024, online child sexual abuse crimes in England and Wales surged by 26%, with platforms like Snapchat and Instagram identified as primary venues.
In the United States, the National Center for Missing and Exploited Children (NCMEC) has tracked an alarming rise in specific threats. In the first half of 2025 alone, reports of online enticement jumped 56%, while financial sextortion cases—a brutal form of blackmail disproportionately affecting boys and linked to at least 36 teen suicides since 2021—rose by nearly 70%.
The rapid advancement of artificial intelligence has added a frightening new dimension. NCMEC reported a staggering 6,341% increase in AI-related child exploitation reports in the first half of 2025, as predators leverage generative AI to create abusive material and simulate harmful interactions.
This crisis is increasingly being viewed not as a series of isolated incidents, but as a direct consequence of platform architecture. Features like infinite scroll, autoplay videos, and engagement-based algorithms have been shown to foster addiction and expose minors to content romanticizing suicide, eating disorders, and self-harm.
This perspective is gaining legal traction. Just this month, separate U.S. juries found Meta and Alphabet liable for harm to children, with verdicts focusing on negligent product design rather than user content. A New Mexico court levied a $375 million penalty against Meta for failing to protect children, marking a potential turning point in a legal landscape long dominated by Section 230 protections for platforms.
Intelligence Over Surveillance
TeenAegis is betting that the key to reversing these trends lies in intelligence, not surveillance. The company's platform explicitly rejects the parental control model of monitoring a child's every click. Instead, its core components—an Intelligence Hub and a real-time Command Center—are built to function like a cybersecurity operations center for child safety.
The platform applies methodologies from the financial and cybersecurity sectors to track systemic risk. This involves analyzing behavioral patterns across platforms, identifying grooming pathways, and flagging design vulnerabilities that create opportunities for harm. By focusing on the ecosystem rather than the individual, the company aims to provide a macro-level view of how platforms fail to protect young users.
This approach stands in stark contrast to the internal safety measures of tech giants, which are often criticized as reactive and secondary to their core business models of maximizing user engagement. While platforms employ content moderators and AI filters, their fundamental design often remains unchanged. TeenAegis seeks to provide the independent, external intelligence needed to challenge that status quo.
"Fines have not worked. PR statements have not worked," MacDermott stated. "What has been missing is independent, intelligence-driven visibility into how these systems operate—and who is responsible when they fail."
Forging Accountability in Courtrooms and Congress
The ultimate goal of this intelligence-gathering is to create consequences. TeenAegis plans to support litigation and regulatory action by providing evidence-based insights for investigations and duty-of-care analysis. This could arm lawyers and lawmakers with the data needed to build stronger cases against platforms for product liability and negligence.
The timing aligns with a growing legislative push for accountability on both sides of the Atlantic. In the U.S., the bipartisan Kids Online Safety Act (KOSA) continues to gain momentum, aiming to establish a duty of care for platforms to act in minors' best interests. In Europe, the Digital Services Act (DSA) already imposes strict risk assessment and mitigation obligations on major platforms, with specific mandates for protecting children.
TeenAegis also intends to introduce a public accountability layer, ranking major technology and AI platforms on their safety performance. By making these intelligence-backed assessments public, the company hopes to empower consumers, advertisers, and investors to apply market pressure on companies that fall short.
Alongside its intelligence-gathering operations, the company is also launching TeenAegis Sentinel, an interactive, game-based experience. This tool aims to educate families on how digital threats unfold without resorting to invasive monitoring, promoting digital literacy and resilience as a first line of defense. This dual approach—empowering families while holding systems accountable—signals a comprehensive strategy to reshape the digital landscape for the next generation.