FBI warns of AI-generated voice scams impersonating government officials

In a stark and urgent warning, the Federal Bureau of Investigation’s (FBI) Internet Crime Complaint Center has alerted the public and government personnel to an escalating malicious messaging campaign that leverages AI to impersonate senior U.S. officials.
These deepfake scam campaigns represent a sobering new stage in the evolution of social engineering threats. What began as crude email phishing schemes has morphed into synthetic voice and video impersonation campaigns that can deceive even the most experienced and security-conscious targets. This new frontier in cyber-enabled deception demonstrates the growing sophistication of threat actors and the increasing ease with which AI tools can be deployed for nefarious purposes.
Since April 2025, the FBI explained, malicious actors have used a combination of AI-generated voice messages and text messages to gain unauthorized access to personal and official accounts. These “malicious actors … claim to come from a senior U.S. official in an effort to establish rapport before gaining access to personal accounts,” the FBI said.
Continuing, the FBI said, “One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform. Access to personal or official accounts operated by U.S. officials could be used to target other government officials, or their associates and contacts, by using trusted contact information they obtain. Contact information acquired through social engineering schemes could also be used to impersonate contacts to elicit information or funds.”
One notable instance involved the impersonation of White House Chief of Staff Susie Wiles. Unidentified individuals sent messages and made calls to senators, governors, and business leaders pretending to be Wiles. Some recipients reported hearing a voice resembling Wiles, likely generated through AI. Requests included sensitive information, such as lists of individuals eligible for presidential pardons, as well as financial transfers.
According to the FBI, scams like this have specifically targeted current and former high-ranking officials within the federal and state governments, along with individuals in their networks. The primary goal appears to be establishing rapport through believable impersonation to manipulate targets into divulging sensitive information or clicking malicious links. These links often masquerade as part of an ongoing conversation, typically under the pretense of shifting the dialogue to a different messaging platform. Once clicked, they can compromise the target’s device, credentials, or broader network.
The danger lies not only in the technical means of exploitation but in the psychological manipulation enabled by synthetic media. Deepfake technology uses machine learning to replicate the voice and speaking style of a real individual with such precision that even seasoned professionals have been unable to discern the deception without forensic analysis or specialized tools. Similar techniques have been used to create fake videos, but the current campaign relies primarily on voice, which remains more difficult to detect and verify in real-time conversations.
The FBI’s advisory places special emphasis on the use of smishing and vishing tactics. Smishing refers to phishing carried out through SMS or MMS text messages, while vishing involves voice-based communication, now increasingly bolstered by AI-generated audio. Both are designed to appear credible and personal, taking advantage of the perceived trust and urgency that come with receiving a direct message from a recognizable authority.
A specific example cited in the FBI’s advisory illustrates how attackers encourage targets to switch platforms during a conversation and then use the opportunity to introduce malware or fraudulent links. These may lead to actor-controlled websites designed to harvest login credentials, deploy surveillance software, or pivot further into governmental systems. The FBI warns that once one account is compromised, it may serve as a launching point for additional attacks against other officials or their close contacts, creating a cascade of security vulnerabilities across multiple nodes of government and institutional trust.
This advisory coincides with a broader pattern of AI-enabled scams that have made headlines during the past year. One prominent example involved a voice phishing attempt targeting LastPass, a widely used password manager. In that case, attackers used deepfake audio to impersonate LastPass CEO Karim Toubba in calls and voicemails sent to an employee over WhatsApp, hoping to coerce them into revealing their master credentials.
LastPass Senior Principal Intelligence Analyst Mike Kosak wrote in April that, “In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. As the attempted communication was outside of normal business communication channels and due to the employee’s suspicion regarding the presence of many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to both mitigate the threat and raise awareness of the tactic both internally and externally.”
Although the attempt failed, the incident demonstrates the potency of synthetic voice in breaching the human layer of security.
Another major case emerged during the 2024 election cycle, when a robocall campaign used a deepfake version of then-President Joe Biden’s voice to urge New Hampshire Democrats to abstain from voting. The calls triggered both public outcry and legal consequences: a Democratic consultant was indicted, and the telecommunications provider that failed to authenticate the calls was fined $1 million by the Federal Communications Commission for violating federal caller ID authentication rules.
As these incidents illustrate, deepfake audio is not confined to criminal fraud. It is also being deployed in political influence operations and state-sponsored espionage. The FBI’s new warning elevates the concern to a national security level by acknowledging that attackers are now impersonating government officials not just for personal gain, but potentially to undermine institutional integrity or extract classified information.
Compounding the threat is the accelerating sophistication of deepfake technology. It wasn’t that long ago that constructing a convincing voice clone required a substantial sample of audio from the target speaker. Now, however, widely available AI tools can replicate someone’s voice convincingly using as little as a few seconds of reference audio. This ease of access, combined with the public availability of high-quality voice samples from officials who frequently speak in public, has dramatically lowered the barrier for attackers to deploy convincing impersonations at scale.
Cybersecurity experts monitoring these campaigns note that the voice synthesis tools being used are not restricted to clandestine groups or state actors. Many are commercially available or open source, placing powerful impersonation capabilities in the hands of virtually anyone with basic technical knowledge. According to threat analysts at Reality Defender, these scams are no longer isolated events but reflect a systematic weaponization of AI voice modeling against high-value targets, including government institutions and corporate executives.
The campaign’s objective is often financial, with scammers manufacturing a false sense of urgency to push targets toward rapid decision-making. For example, impersonators might claim that immediate wire transfers are required to resolve national emergencies, policy disputes, or internal investigations. Such tactics work precisely because they exploit ingrained trust in recognizable authority figures and the pressure to comply quickly in high-stakes scenarios.
To mitigate this growing threat, the FBI has issued a range of recommendations intended to help individuals identify and respond to suspicious communications. Chief among them is independent verification: if a call or message appears to come from a government official, no matter how convincing it sounds, recipients should pause, look up the official’s contact information through official channels, and confirm the sender’s identity before taking any action.
The FBI also urges heightened scrutiny of digital artifacts. This includes close inspection of email addresses, phone numbers, URLs, and any unusual spelling or formatting in messages. Scammers may use visually similar characters or domain spoofing to mask fraudulent origins. In some cases, they’ve also incorporated AI-generated photographs, altered names, or slightly modified contact information to further the illusion of legitimacy.
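To make that kind of scrutiny concrete, the short Python sketch below checks a URL for two of the signals described here: lookalike (homoglyph) characters hidden in a hostname and near-miss spellings of a familiar domain. The TRUSTED_DOMAINS allowlist, the 0.8 similarity threshold, and the check_url helper are hypothetical illustrations, not anything specified in the FBI advisory.

```python
# Heuristic checks for the URL-spoofing signs described above: lookalike
# (homoglyph) characters in a hostname and near-miss spellings of a known
# domain. Illustrative only; the allowlist and threshold are assumptions.
from difflib import SequenceMatcher
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"fbi.gov", "ic3.gov", "whitehouse.gov"}  # hypothetical allowlist

def check_url(url: str) -> list[str]:
    warnings = []
    host = (urlparse(url).hostname or "").lower()
    if not host:
        return ["no hostname found in URL"]

    # Internationalized hostnames encode to ASCII with an "xn--" prefix;
    # lookalike characters (e.g., Cyrillic "а" for Latin "a") surface here.
    if "xn--" in host.encode("idna").decode("ascii"):
        warnings.append(f"{host}: contains non-ASCII lookalike characters")

    # Near-miss spellings of a trusted domain, e.g., "fbl.gov" for "fbi.gov".
    for trusted in TRUSTED_DOMAINS:
        if host == trusted or host.endswith("." + trusted):
            continue  # exact match or a legitimate subdomain
        if SequenceMatcher(None, host, trusted).ratio() > 0.8:
            warnings.append(f"{host}: suspiciously similar to {trusted}")
    return warnings

print(check_url("https://fbl.gov/pardons"))  # near-miss spelling: flagged
print(check_url("https://www.fbi.gov/"))     # exact domain: no warnings
```

Checks like these are only a first filter; the advisory’s core advice remains to verify any sender through a separately obtained, official channel.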
Visual and audio anomalies can also serve as clues. Subtle imperfections in imagery, such as distorted facial features, unrealistic shadows, or awkward hand movements, may signal the presence of synthetic media. In audio, listeners are advised to focus on cadence, tone, and unnatural pauses. Given the quality of modern voice cloning, however, these indicators may not always be present.
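As one deliberately simple illustration of the pause cue, the Python sketch below uses the librosa audio library to measure the gaps between voiced segments in a recording. The file name call.wav and the thresholds are arbitrary assumptions; real detection systems rely on far richer signals, so treat this as a toy heuristic, not a detector.

```python
# A toy pause-analysis heuristic for the "unnatural pauses" cue described
# above. Purely illustrative; not a deepfake detector. Requires librosa.
import librosa
import numpy as np

y, sr = librosa.load("call.wav", sr=16000)  # hypothetical suspicious recording

# Split the signal into voiced intervals; anything quieter than 30 dB
# below the peak level is treated as silence.
intervals = librosa.effects.split(y, top_db=30)

# Gaps between consecutive voiced intervals, in seconds.
gaps = [(start - prev_end) / sr
        for (_, prev_end), (start, _) in zip(intervals[:-1], intervals[1:])]

if gaps:
    print(f"pauses: n={len(gaps)}, mean={np.mean(gaps):.2f}s, "
          f"std={np.std(gaps):.2f}s, max={max(gaps):.2f}s")
    # Very uniform pause lengths (low spread) or oddly placed long gaps can
    # be one weak hint of synthesized or spliced speech; thresholds are
    # arbitrary here and would need tuning against real data.
    if len(gaps) > 5 and np.std(gaps) < 0.05:
        print("warning: unusually uniform pacing; inspect further")
```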
The FBI acknowledges that AI-generated content has advanced beyond the point of easy detection, and it therefore encourages institutions to deploy real-time deepfake detection systems at critical communication points. These tools, some of which are now being integrated into enterprise communication platforms, use pattern recognition and AI reverse-engineering to flag anomalous audio or visual signals. Agencies and corporations that handle sensitive data or financial transactions are especially urged to adopt these measures as a first line of defense.
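The advisory describes an architectural pattern rather than any particular product. As a rough, hypothetical sketch of where such a system sits, the code below gates inbound audio through a screening step before delivery; the score_deepfake_probability stub stands in for whatever commercial or open-source detector an organization actually deploys.

```python
# Hypothetical shape of a real-time screening hook at a communication
# chokepoint, such as an inbound voicemail pipeline. Illustrative only:
# the detector itself is a stub to be replaced by a real model or vendor API.
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    score: float        # 0.0 = likely genuine, 1.0 = likely synthetic
    quarantined: bool   # held for human review instead of delivered

def score_deepfake_probability(audio_bytes: bytes) -> float:
    """Stub standing in for a real deepfake-detection model."""
    raise NotImplementedError("plug in a detection model or vendor API here")

def screen_inbound_voicemail(audio_bytes: bytes,
                             threshold: float = 0.7) -> ScreeningResult:
    # Score the audio, then quarantine anything above the threshold so a
    # human can verify the caller out-of-band before the message is trusted.
    score = score_deepfake_probability(audio_bytes)
    return ScreeningResult(score=score, quarantined=score >= threshold)
```

The design choice worth noting is that the hook quarantines rather than blocks: given imperfect detectors, routing suspicious messages to human review avoids silently dropping legitimate communications.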
The FBI stresses that failing to implement verification protocols and detection safeguards is equivalent to leaving one’s front door unlocked in a high-crime area. With AI-enabled impersonation scams proliferating and maturing rapidly, proactive defense is no longer optional. It is a foundational requirement for maintaining trust in digital communications that underpin government operations, business decisions, and personal interactions alike.
The FBI’s warning makes clear that vigilance, education, and technical preparedness are now essential components of national cybersecurity, not just for IT departments but for every individual who answers a phone or opens a message purporting to come from someone in power.