Parents vs. Preemption: The Battle for AI Regulation and Child Safety
As bereaved parents protest in DC, a high-stakes battle rages over a policy that could gut state AI laws and redefine online child safety forever.
WASHINGTON, DC – December 02, 2025 – As evening fell on the nation's capital, the cold marble of the National Gallery of Art became a canvas for a desperate message. “Stop Sacks' AI Preemption,” “AI Amnesty Harms Kids,” and “Don't Let AI Buy the Government” flashed in projected light just blocks from the U.S. Capitol, a stark protest staged not by lobbyists, but by bereaved parents who have lost children to online harms.
On the eve of a major House hearing on child safety, these families, including mother-advocates Lori Schott and Maurine Molak, took their fight public. Their target is a complex but consequential policy known as “AI preemption,” a provision being quietly negotiated for inclusion in the must-pass National Defense Authorization Act (NDAA). If successful, it would create a federal ceiling for AI regulation, effectively blocking states from passing their own, often tougher, laws to protect citizens—especially children—from the dangers of rapidly advancing artificial intelligence. The protest shines a harsh light on a critical question shaping the future of business and society: Who gets to write the rules for AI?
The Push for Federal Control
At the heart of the conflict is a powerful push from Silicon Valley and its allies in Washington to establish a single, uniform federal standard for AI. Proponents, including tech investor David Sacks, the figure named in the protest, and factions within the Trump administration, argue that a “patchwork quilt” of state regulations would stifle innovation, create confusing compliance burdens for businesses, and hinder America's ability to compete globally in the AI race. This sentiment has been echoed by major tech trade associations and is backed by a formidable lobbying campaign.
The strategic vehicle for this policy push is the NDAA, the annual defense spending bill that is considered essential legislation. Attaching unrelated policy measures to must-pass bills is a time-honored, if controversial, legislative tactic. A similar attempt to impose a moratorium on state AI laws was overwhelmingly defeated in the Senate earlier this year, but its revival within the NDAA framework signals a renewed and serious effort.
For the parents and child safety advocates on the ground, this legislative maneuvering feels like a betrayal. They argue that while the federal government has been paralyzed, states have stepped up to provide the only meaningful protections against AI-driven harms.
“Parents are already seeing AI-powered algorithms manipulate their children's feeds and serve them harmful content, and Big Tech wants to use watered-down kids safety legislation as a bargaining chip to prevent states from doing anything about it,” said Lori Schott, whose daughter died by suicide after what she describes as depression caused by TikTok. “Federal AI preemption would abandon families at the very moment we need protection most, blocking states from safeguarding our children against AI harms while Washington offers nothing but empty promises in return.”
States as Laboratories of Digital Safety
The opposition to preemption is not just emotional; it is a broad, bipartisan coalition of state officials who see it as a direct assault on their ability to govern. Thirty-six state attorneys general and 280 state lawmakers from 43 states have formally urged Congress to reject the measure. They contend that states are acting as crucial “laboratories of democracy,” developing innovative and responsive policies that address real-world harms as they emerge.
In 2025 alone, lawmakers in all 50 states introduced AI-related bills, and nearly 100 measures were enacted across 38 states. These aren't abstract regulations; they are targeted solutions. New York recently passed a law requiring AI chatbots to detect and respond to users expressing suicidal thoughts. California enacted a landmark law demanding transparency and risk assessments from developers of the most powerful AI models. Colorado is set to implement rules protecting citizens from algorithmic discrimination. Critically, 45 states have passed laws targeting the creation of AI-generated child sexual abuse material (CSAM), a rapidly growing scourge.
“States have stepped up to fill the void left by federal paralysis, and they should be applauded for protecting their children, not punished with preemption measures that serve corporate interests and tech billionaires over child safety,” said Sarah Gardner, CEO of Heat Initiative, which supported the protest.
Advocates for states' rights argue that nullifying these existing laws without a robust federal alternative in place would leave children and consumers dangerously exposed. “For young people and parents, federal AI preemption means losing the only real protections we’ve won,” added Ava Smithing, Advocacy Director at the Young People’s Alliance.
The Influence Industry and Legislative Chess
The fight over preemption is playing out against the backdrop of a reported $150 million AI lobbying war. On one side, industry-backed groups like the “Leading the Future” Super PAC—funded by entities like Andreessen Horowitz and OpenAI President Greg Brockman—are spending tens of millions to advocate for a unified federal law. Their core message is that regulatory consistency is essential for economic leadership.
On the other side, new public-interest groups like “Public First” and “Americans for Responsible Innovation” are mobilizing to support candidates who favor stronger oversight and oppose preemption. This clash of financial titans underscores the immense stakes of AI governance.
The debate is further complicated by its linkage to the Kids Online Safety Act (KOSA). Child safety advocates describe the version of KOSA being considered in the House as a “weaker” proposal than the Senate's version, which includes a stronger “duty of care” provision requiring platforms to act in the best interests of minors. Some legislative observers believe the preemption push is being strategically bundled with watered-down child safety measures to make it more palatable to lawmakers.
For now, the policy's fate rests with congressional negotiators working behind closed doors on the final text of the NDAA. Key senators have already labeled the preemption clause a “poison pill” and threatened to block the entire defense bill if it is included. This sets the stage for a dramatic showdown. As the images fade from the walls of the National Gallery, the battle they represent—between grieving families and corporate giants, states' rights and federal authority, caution and innovation—is reaching its critical moment.