
Deepfakes are testing the limits of American governance

Under the looming omnipresence of AI, the United States finds itself at a crossroads in determining how best to regulate the rise of synthetic media, which now threatens everything from personal privacy to national security.

At the heart of this debate lies a provision quietly inserted into the House Energy and Commerce Committee’s budget reconciliation bill that proposes a ten-year moratorium on all state and local regulations of AI systems, including AI-generated content like deepfakes. This moratorium, if enacted, would prohibit states and local jurisdictions from passing or enforcing laws that govern AI models or decision-making systems.

The justification for the Republican-backed moratorium is grounded in the desire to prevent a patchwork of inconsistent laws across states and to foster uniform federal oversight. Yet, the consequences of such a sweeping federal preemption of AI regulation could be catastrophic at a time when deepfake technology is proliferating with unprecedented speed and ferocity.

According to a 2025 report by Sumsub, deepfake incidents have increased by an astounding 245 percent year over year. This spike is not merely statistical; it is affecting lives, institutions, and economies in real terms. Twenty-six percent of executives surveyed reported that their financial or accounting departments had been targeted by deepfake-powered fraud in the past year alone. The technology’s accessibility and affordability have accelerated its abuse, enabling malicious actors to generate persuasive real-time fake voices, images, and videos with minimal expertise.

Meanwhile, legislative responses have struggled to keep pace. Most existing laws address deepfakes only after harm is done, placing the burden on victims to detect and respond to violations that are often invisible until it is too late. At the same time, national regulatory inertia contrasts sharply with more proactive efforts at the state level.

In the past two years, over 120 deepfake-related laws have been introduced or enacted by states, covering everything from nonconsensual sexual imagery to political disinformation. California passed eight bills addressing synthetic media in a single month. Tennessee and Iowa have criminalized sexually explicit AI-generated content, while New Jersey recently implemented fines of up to $30,000 for malicious deepfake creation.

These state laws represent not only legislative innovation, but also a clear public mandate to secure digital spaces against deception. Removing the states from this equation risks halting the only meaningful progress the U.S. has made in regulating deepfakes.

“We’re in a very dangerous time, and we’re playing defense on everything that we do,” said Josh Lowenthal, a Democratic California state legislator.

“Any comprehensive preemption of state and local regulation creates potential security gaps that may leave citizens and organizations vulnerable to emerging threats,” wrote Reality Defender Co-Founder and CEO Ben Colman. “The collaborative development of balanced regulatory frameworks – alongside technological solutions – represents the most promising path toward an AI landscape that fosters both innovation and trust.”

The reality on the ground, though, reinforces the urgency of immediate intervention, state lawmakers and privacy rights advocates argue. In April, for example, the Federal Bureau of Investigation issued a warning about a widespread smishing and vishing campaign that leveraged AI-generated voice and text messages to impersonate senior U.S. officials. The attackers used these messages to lure victims onto secondary platforms, ultimately stealing login credentials or soliciting money.

“Synthetic identity and credit washing fraud have hit another record high and are showing no signs of slowing down, according to a new report by TransUnion,” the Information Security Media Group said Friday, adding that from “synthetic identity frauds to AI-enhanced insurance scams and global malware takedowns, today’s fraud landscape demands vigilance across every sector and system.”

These tactics mirror schemes in the private sector, where companies like Arup and Ferrari have suffered millions of dollars in losses due to AI-powered fraud. What used to be the stuff of science fiction is now a daily operational threat, one that blends seamlessly into communications networks and exploits every gap in existing regulatory frameworks that were not designed to address deepfakes.

Despite this, the federal government under the Trump administration has charted a course that prioritizes deregulation over security. The administration’s AI Action Plan, shaped by more than 10,000 public comments and led by the Office of Science and Technology Policy, touts American innovation as its centerpiece. Yet beneath the rhetoric lies a systematic dismantling of safeguards.

Over $328 million in National Science Foundation grants related to disinformation, biometric security, and AI risk modeling have been eliminated, including funds that supported election security and deepfake detection. The administration has also framed efforts to counter deepfake content as threats to free speech, a position that conflates content moderation with ideological censorship.

The executive order driving this deregulatory push accused the previous administration of suppressing free expression by supporting AI transparency initiatives, an accusation that isn’t grounded in fact. The consequences of this posture are already visible. A case in Pennsylvania highlights the stakes. A police officer caught with AI-generated sexual images of minors could not be charged due to the absence of relevant laws at the time. Only after a new ban took effect could the state bring charges in a separate case involving a similar crime.

In Iowa, prosecutors struggled to pursue justice in a case involving high school students who circulated deepfake nudes of classmates. Cross-border legal challenges and lack of cooperation from overseas app developers complicated efforts, revealing a gap in international enforcement capabilities.

Meanwhile, courts are beginning to see lawsuits challenging the very laws meant to regulate deepfakes, with one California statute paused by a federal judge who argued it was too blunt an instrument. Opponents claim these laws infringe upon satire and parody, particularly in political contexts. Plaintiffs include conservative content creators and platforms like Rumble and X, which argue that deepfake laws inhibit expression and innovation.

However, opposition to regulation cannot ignore the immense risk posed by unchecked synthetic media. Deepfake impersonations are no longer limited to satire or adult content. They have been used to deceive senators into meetings with fake foreign officials, to simulate CEO voices in fraudulent WhatsApp messages, and to manipulate video footage in politically volatile environments.

Pindrop, a company specializing in voice authentication, reported a 683 percent increase in deepfake audio attacks in 2024, and says it sees up to seven synthetic voice scams per day targeting major financial institutions. The challenge is not just determining who someone is, but whether they are real. That is the level of existential verification the U.S. must now address.

Despite the growing crisis, federal policy continues to trail behind the needs of industry, law enforcement, and civil society. Comments submitted to the administration’s AI Action Plan make clear that synthetic media is not a fringe issue but a central threat to economic and social trust.

Organizations like iProov and the Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG) have called for federal investment in liveness verification, real-time detection systems, and biometric authentication standards. Their message was unequivocal: without a national infrastructure for verifying reality, AI adoption will falter under the weight of mistrust.

Yet the Trump administration has offered no comprehensive roadmap for how to mitigate these risks. Instead, it has repealed Biden-era executive orders and policies on AI regulation and focused on eliminating “barriers to innovation” rather than constructing safeguards for security. This vision of American leadership in AI, based solely on deregulation and private sector agility, is blind to the structural vulnerabilities that have been unequivocally exposed by dangerous and damaging AI-generated deepfakes.

Innovation without trust is unsustainable. If Americans cannot believe what they see, hear, or read, they will not engage with the technologies that produce these outputs, no matter how efficient, intelligent, or economically beneficial they may be.

The argument that AI regulation must be careful not to stifle creativity or economic growth is valid, but the solution is not to halt regulation entirely; it is to construct it wisely. And that means a federal regulatory framework that distinguishes between high-risk and low-risk AI use cases, encourages industry cooperation, and includes a sunset clause allowing states to act when the federal government does not.

It also means restoring federal funding for detection and verification research, mandating provenance markers in AI-generated content, and launching public education initiatives to improve media literacy.

The U.S. cannot afford to treat deepfakes as merely a legal curiosity or a civil liberties debate. They are tools of fraud, manipulation, and digital sabotage. The growing consensus among industry experts, public officials, and cybersecurity organizations is that synthetic media must be regulated proactively, proportionately, and without ideological distortion. The choice is not between freedom and regulation; it is between chaos and coherence.

A ten-year moratorium on state action without a federal substitute invites regulatory paralysis at the precise moment when decisive action is most needed. Deepfakes do not pause for policy debates. They evolve, proliferate, and destabilize. The longer the government delays, the more difficult it will be to restore public trust and rein in harmful synthetic media.

In the end, a credible AI strategy must prioritize trust as its first principle. Trust is not a luxury of governance; it is the foundation upon which innovation, economic growth, and civic engagement are built. Without it, the American AI project will not lead the world. It will lose itself in illusion.
