AI Clones of Kardashian Used in Scam, Prompting New Safety Initiative
- 250% increase in deepfake incidents in 2024
- 72% of Americans reported seeing fake celebrity endorsements (2025 McAfee Labs survey)
- 90% of online content could be synthetically generated by late 2026 (expert prediction)
Experts warn that AI-driven deepfake scams targeting celebrities are escalating rapidly, requiring urgent digital literacy efforts and stronger regulations to protect public trust and prevent financial harm.
LAS VEGAS, NV – January 15, 2026 – An entertainment and media platform has launched a major digital safety initiative after its founder was the target of a year-long, sophisticated impersonation scheme that used AI-generated likenesses of celebrities, including Kim Kardashian and Kylie Jenner. GonnaHappen, a VIP experiences company, announced its AI Impersonation Awareness & Digital Safety Initiative today, aiming to combat the escalating threat of deepfake technology being used for fraud.
The move was prompted by a direct and prolonged attack on the company's founder, Aaron G. Beebe, who was ensnared in a web of deception involving AI-generated videos, cloned voice messages, and a network of fake social media accounts. The case serves as a stark illustration of how easily public figures' identities can be weaponized, turning celebrity recognition into a tool for manipulation and crime.
The Deepfake Deception: A Personal Encounter
The campaign against Beebe was not a simple phishing email but a coordinated, multi-pronged operation. According to GonnaHappen, the perpetrators deployed a complex strategy that included AI-generated video and voice messages that convincingly mimicked Kim Kardashian. This initial contact then expanded to include accounts impersonating Kylie Jenner and a cast of fabricated personas posing as agents, security staff, and handlers.
The goal was to build a false sense of trust and exclusivity. The scammers made fraudulent offers of employment, exclusive meet-and-greets, and other VIP access opportunities. To bolster their credibility, they provided fabricated gift shipment details, complete with fake tracking numbers and credentials. As the scheme progressed, the tactics escalated into attempts at account takeovers and phishing attacks, alongside offers of cryptocurrency and requests for financial information.
“This was not a random incident,” said Aaron G. Beebe in a statement. “What we identified was a coordinated impersonation model leveraging AI technology, celebrity recognition, and emotional manipulation. Once the scope became clear, the focus shifted from personal impact to preventing others from being targeted.”
The methods used in the attack, such as disappearing AI video messages that prevent verification and the use of multiple personas to create a sense of a professional operation, mirror tactics that cybersecurity experts have identified as hallmarks of modern, AI-driven fraud.
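One practical countermeasure to the disappearing-message tactic is simply to preserve evidence before it vanishes. As a minimal sketch (not part of GonnaHappen's initiative; the pipeline and storage layout are assumptions for illustration), a recipient-side tool could hash and archive incoming media the moment it arrives, so a verifiable record survives even after the sender deletes the original:

```python
import hashlib
import json
import time
from pathlib import Path

# Hypothetical local evidence store; in practice this might be
# append-only or offsite so the sender cannot tamper with it.
ARCHIVE_DIR = Path("media_archive")

def archive_incoming_media(media_bytes: bytes, sender: str, note: str = "") -> dict:
    """Hash and store a copy of incoming media so it can be verified
    later, even if the sender deletes the original message."""
    ARCHIVE_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = {
        "sha256": digest,
        "sender": sender,
        "received_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "note": note,
    }
    # Store the raw media plus a sidecar metadata record, keyed by hash.
    (ARCHIVE_DIR / f"{digest}.bin").write_bytes(media_bytes)
    (ARCHIVE_DIR / f"{digest}.json").write_text(json.dumps(record, indent=2))
    return record
```

The cryptographic hash matters here: it lets an investigator later confirm that an archived clip is byte-for-byte identical to what was received, even when the sender controls deletion of the original.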
Fame as a Weapon: The Rising Tide of Celebrity Impersonation
The incident targeting GonnaHappen is a microcosm of a much larger and rapidly growing problem. The use of AI to create deepfakes—synthetic media where a person's likeness is replaced with someone else's—is surging. Research indicates that deepfake incidents increased by over 250% in 2024, with some experts predicting that up to 90% of all online content could be synthetically generated by the end of 2026. The technology, once the domain of specialists, is now widely accessible, with the cost to create a convincing deepfake estimated to be as low as a few dollars.
Celebrities are prime targets. Their high visibility and the public's familiarity with their voices and appearances make them ideal subjects for impersonation. Scammers exploit this familiarity through what some researchers call the "celebrity hijack" formula, using deepfaked endorsements for fraudulent investment schemes, fake giveaways, and political disinformation. High-profile figures like Taylor Swift, Elon Musk, and YouTube star MrBeast have had their likenesses stolen for scams that have cost victims millions of dollars.
A 2025 survey by McAfee Labs revealed the scale of the issue: 72% of Americans reported seeing a fake celebrity endorsement, and nearly 40% admitted to clicking on one. For the entertainment industry, which is built on authenticity and fan trust, the threat is existential, capable of damaging reputations and eroding the connection between artists and their audiences.
Fighting Fire with AI: A Call for Digital Literacy
In response, GonnaHappen is positioning its initiative as a necessary defense for the communities it serves. The AI Impersonation Awareness & Digital Safety Initiative is not about sensationalism, but education. The platform plans to roll out public education campaigns to teach fans and industry professionals how to identify the tell-tale signs of AI-driven scams. This includes raising awareness about pressure tactics involving urgency and secrecy, fake security alerts, and the use of disappearing media.
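To make those tell-tale signs concrete, here is a minimal sketch of how such a checklist could be mechanized as a rule-based screen. The signal categories, phrases, and scoring below are illustrative assumptions for this article, not a published detection model or part of GonnaHappen's initiative:

```python
# Hypothetical rule-based screen for common impersonation-scam signals.
# Categories mirror the warning signs described above: urgency, secrecy,
# fake security alerts, and unusual payment requests.
RED_FLAGS = {
    "urgency":        ["act now", "expires today", "last chance"],
    "secrecy":        ["don't tell anyone", "keep this private", "confidential offer"],
    "security_alert": ["account compromised", "verify your identity", "unusual login"],
    "payment":        ["gift card", "crypto", "wire transfer", "wallet address"],
}

def score_message(text: str) -> tuple[int, list[str]]:
    """Return a crude risk score (number of categories matched)
    and the list of matched categories."""
    lowered = text.lower()
    hits = [
        category
        for category, phrases in RED_FLAGS.items()
        if any(phrase in lowered for phrase in phrases)
    ]
    return len(hits), hits

if __name__ == "__main__":
    score, hits = score_message(
        "Account compromised - verify your identity now and keep this private."
    )
    print(f"risk score: {score}, flags: {hits}")
    # -> risk score: 2, flags: ['secrecy', 'security_alert']
```

Real anti-fraud systems rely on far richer signals (sender history, media provenance, account age), but even a crude checklist like this captures the core educational point: these scams combine several recognizable pressure tactics at once, and each one a recipient can name is a chance to stop and verify.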
“This is not about celebrity gossip or speculation,” Beebe stated. “It’s about protecting fans, creators, and the public figures whose identities are being misused. Responsible platforms have a role to play in addressing this issue before more people are harmed.”
The initiative will also establish clear pathways for reporting suspected impersonation activity and will feature collaborations with cybersecurity and digital ethics professionals. It represents a proactive step from within the entertainment sector, acknowledging that platforms at the intersection of media, nightlife, and influencer culture are uniquely vulnerable and thus have a unique responsibility to act.
The Legal and Ethical Battlefield
As the technology races ahead, lawmakers and regulators are scrambling to keep pace. The legal landscape surrounding AI and deepfakes is a complex patchwork of new and proposed legislation. The European Union has taken the most comprehensive approach with its AI Act, which entered into force in August 2024 and whose transparency and labeling requirements for systems that generate deepfake content become applicable in August 2026.
In the United States, efforts are more fragmented. The DEFIANCE Act, passed by the Senate in January 2026, creates a federal right for victims of non-consensual sexually explicit deepfakes to sue the creators. Other bills, like the AI Impersonation Prevention Act and the AI Fraud Deterrence Act, aim to criminalize the use of AI for impersonating federal officials or committing fraud. However, a comprehensive federal framework remains elusive, and state-level laws often face First Amendment challenges.
This regulatory uncertainty highlights the critical need for public vigilance and corporate responsibility. With enforcement struggling to catch up to the global and ephemeral nature of these scams, the first line of defense is an informed public.
“Technology is advancing faster than public awareness,” Beebe concluded. “As AI becomes more convincing, education becomes essential. This initiative is about doing the right thing — for fans, for creators, and for the public figures whose names are being exploited.”