New Watchdog Tackles AI's Impact on Reality and Creator Rights
The AI Rights Project has launched with bold plans to legislate digital reality and protect creators. Here’s why it matters for tech, markets, and you.
PHOENIX, AZ – December 03, 2025 – As artificial intelligence continues its relentless integration into the fabric of society, a new independent organization has emerged from Phoenix with an ambitious goal: to establish clear rules of the road for a technology that is rapidly outpacing law and public understanding. The AI Rights Project announced its official launch today, positioning itself as a nonpartisan public education and policy initiative dedicated to helping citizens, students, and creators navigate the complex new terrain of AI.
Founded by intellectual property and AI attorney Jim W. Ko, the organization aims to fill what it calls a “critical gap” between technological advancement and civic preparedness. Operating under a “Know Your AI Rights™” framework, the project is moving beyond abstract ethical discussions to propose concrete legislative models. With a stated independence from “Big AI influence,” the initiative is debuting with two potent policy proposals aimed at two of the most contentious issues in the AI space: the erosion of reality through synthetic media and the mass ingestion of copyrighted works to train AI models.
Legislating Authenticity in a Post-Truth Era
The first major pillar of the project’s strategy confronts the escalating crisis of digital authenticity. As AI-generated images, audio, and video become virtually indistinguishable from reality, the potential for mass deception, fraud, and the erosion of civic trust grows daily. While states like Arizona have taken initial steps to regulate deepfakes in the context of elections, The AI Rights Project argues these measures are too narrow.
To address this, the organization has drafted the AI-Simulated Realism Disclosure & Transparency Act. This model legislation, designed first for Arizona but intended as a national template, proposes a simple yet powerful principle: disclosure over suppression. Rather than attempting to ban synthetic media—a move fraught with First Amendment challenges—the bill would mandate clear, consistent disclosures on AI-generated or significantly altered media that is presented as real. The goal is to empower the public with the information needed to distinguish genuine content from sophisticated simulation, a right the project frames as fundamental to democratic integrity.
This approach aligns with a broader push for transparency seen in regulatory discussions from the White House’s “Blueprint for an AI Bill of Rights” to legislative efforts across the country. The proposed act, however, seeks to create a comprehensive, year-round standard that extends beyond political advertising. It also introduces a public-interest enforcement mechanism, giving individuals a tool to hold creators and distributors accountable for undisclosed simulations. By focusing on a right to truthful information, the project hopes to build a legally sound foundation for trust in an era where reality itself is becoming a negotiable commodity.
Redefining Copyright for the Machine Age
The second front in the project’s campaign targets the economic and ethical turmoil roiling the creative industries. The rise of generative AI has been fueled by training models on vast datasets of text, images, music, and code scraped from the public internet. This practice has triggered a firestorm of litigation and a fundamental debate over ownership, with creators arguing their work is being used without consent or compensation to build systems that could ultimately replace them.
In response, The AI Rights Project has developed The AI Training & Copyright Fairness Act. This model bill directly challenges two core premises underpinning the current AI development boom. First, it refutes the notion that making a work publicly available—the very act by which creators share their art with the world—constitutes implied permission for it to be used in AI training. Second, it questions whether the automated, large-scale copying involved in machine learning should be treated as equivalent to human learning under the “fair use” doctrine of copyright law.
This initiative enters a fiercely contested legal landscape. High-profile lawsuits, such as Getty Images’ case against Stability AI and class-action suits from author and artist guilds, are already forcing courts to stretch analog-era laws to fit unprecedented digital technologies. The U.S. Copyright Office is also actively studying the issue, having affirmed that human authorship remains a prerequisite for copyright protection. The proposed act aims to provide statutory clarity where judicial interpretation is currently fractured. It calls for transparent, rights-protective standards for how developers obtain, document, and disclose training data, seeking to return to a first principle of copyright: copying requires consent.
While critics of such regulation argue it could stifle innovation, the project counters that clear, balanced rules are essential for creating the legal certainty and public trust necessary for sustainable technological growth. The bill’s core message is that the future of AI creativity must be built with human creators, not on top of them.
An Ecosystem of Education and Advocacy
Beyond its headline legislative efforts, The AI Rights Project is building a broader ecosystem aimed at fostering widespread AI literacy and civic engagement. Its public education initiatives include a foundational eBook, a “Bill of AI Rights,” and a video series designed to demystify how large language models (LLMs) work. These resources are intended to provide trusted, nonpartisan guidance to families, educators, and community leaders.
Furthermore, the organization plans to launch an AI Rights Advocacy Certification Program to train a new generation of leaders to effectively advocate on these complex issues. It also intends to weigh in on pivotal legal battles by participating as amicus curiae (friend of the court) in cases involving AI-simulated realism, copyright, algorithmic bias in areas like healthcare and employment, and the technology’s impact on democratic participation.
“Rights are only as strong as our understanding of them,” said Founder and Executive Director Jim W. Ko in the launch announcement. “Every citizen deserves clear, trusted information about how AI affects their lives and freedoms.” To underscore its commitment to human creativity, the project has even launched an art competition to replace the interim AI-generated visuals on its website with human-made artwork. By combining detailed policy work with grassroots education, The AI Rights Project is making a calculated bid to ensure that the public has a formidable voice in shaping the rules that will govern our collective digital future.