Blurred line between deepfakes and reality opens door to invasive fraud

Fraud mutates daily. Like a festering mold or a parasitic fungus, it throbs and grows, finding new corners to colonize. And the costs keep piling up: according to the U.S. Federal Trade Commission (FTC), fraud and identity theft-related scams accounted for losses of more than $20 billion in 2024, with more than 22 million victims recorded nationwide. The IRS says impersonation of government entities alone could cost upwards of $800 million in losses by the end of 2025. Hiring fraud is on the rise. Deepfake scams are spiking like the heart rate of a terrified rabbit; in 2024, a deepfake attempt occurred every five minutes. Any media could be synthesized. Any call could mean the end of your savings.
Cue the fraud busters.
Liveness detection can reduce risk of duplicating ID: Identy.io
Brazil’s Identy.io has incorporated passive liveness detection to help in the fight. “Faced with the growing risk of fraud and identity theft, it is essential to ensure that the digital transactions carried out with public and governmental entities, but also with financial institutions, have the highest guarantees of security and protection,” says a blog from the biometrics company.
Passive liveness detection “analyzes certain physical characteristics of the user, such as facial movements in the case of face verification solutions, without requiring the user to perform any specific movement or action. This technology not only makes identity verification more accessible to anyone, regardless of their background or technological knowledge, but also makes it virtually impossible to impersonate, as it is able to detect and discard any attempt to access the user’s digital credentials.”
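As a rough illustration of what "passive" means in practice, the sketch below scores a single selfie frame for spoof artifacts without asking the user to blink, nod or speak. The texture heuristic, function names and threshold are illustrative assumptions for this article, not Identy.io's method, which would rely on trained models rather than a hand-written rule.

```python
# Illustrative sketch only -- not Identy.io's implementation. It shows the
# general shape of a passive liveness check: score one selfie frame for
# spoof artifacts (e.g. the flat, low-detail texture of a printed photo or
# screen replay) without requiring any user action.
import numpy as np

def high_frequency_energy(gray: np.ndarray) -> float:
    """Mean absolute Laplacian response; live skin tends to carry more fine
    texture than a print or replay (a simplifying assumption)."""
    lap = (
        -4 * gray[1:-1, 1:-1]
        + gray[:-2, 1:-1] + gray[2:, 1:-1]
        + gray[1:-1, :-2] + gray[1:-1, 2:]
    )
    return float(np.abs(lap).mean())

def passive_liveness_score(frame: np.ndarray, threshold: float = 4.0) -> dict:
    """Return a liveness decision for one frame. `threshold` is a made-up
    value; a production system would use a trained classifier, not a heuristic."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    energy = high_frequency_energy(gray.astype(np.float32))
    return {"texture_energy": energy, "live": energy > threshold}

# Example: a random array stands in for a captured selfie frame.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(passive_liveness_score(frame))
```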
Jesús Aragón, CEO of Identy.io, says that only biometric touchless verification solutions with passive liveness are the answer to the “social scourge” of fraud. “By reducing the risk of duplicating a user’s identity to virtually zero, solutions such as those offered by Identy.io can help save billions of dollars for users and public entities.”
Hiring fraud has reached level of ‘security crisis’: Glider AI
Glider AI, based out of Cupertino, California, has announced the launch of ID Verify, which a release calls “a secure identity verification product purpose-built to protect enterprises from rapidly growing and increasingly sophisticated hiring fraud.”
Are you hiring a real person? Or is that ideal software developer candidate on the Zoom call an AI-generated persona deployed by North Korean operatives to infiltrate hiring pipelines and gain access to sensitive systems? It is becoming harder and harder to tell.
Glider AI says its research has revealed a 92 percent increase in candidate fraud compared to pre-pandemic levels. The firm’s CEO, Satish Kumar, says that’s “more than just a compliance issue – it’s a security crisis. Foreign adversaries are exploiting the hiring process to gain access to proprietary data and infrastructure. We built ID Verify to close this backdoor before it opens. We catch what others miss, whether it’s a fake ID, or a deepfake in a video interview.”
Eyes are out, scanning for those digital masks that mean to do us harm. Glider AI says its ID Verify product looks for fraud before, during and after the interview process. It can detect deepfakes using facial recognition and liveness checks. In real time, it can flag unauthorized use of AI for cheating. And it validates hundreds of global ID types from more than 150 countries in multiple languages.
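That before/during/after split lends itself to a staged pipeline. The sketch below is a hypothetical outline of such a flow; every class, field and function name is invented for illustration and none of it is drawn from Glider AI's product.

```python
# Hypothetical staged hiring-fraud screen, loosely following the
# before / during / after structure described above. Not Glider AI's API.
from dataclasses import dataclass, field

@dataclass
class CandidateSession:
    document_country: str
    document_valid: bool          # result of an upstream ID-document check
    liveness_passed: bool         # passive liveness on the interview video
    face_matches_document: bool   # face in video vs. face on the ID
    ai_assist_flags: int = 0      # real-time signals of unauthorized AI use
    findings: list = field(default_factory=list)

def screen_before(s: CandidateSession) -> None:
    if not s.document_valid:
        s.findings.append("ID document failed validation")

def screen_during(s: CandidateSession) -> None:
    if not s.liveness_passed:
        s.findings.append("possible deepfake: liveness check failed")
    if not s.face_matches_document:
        s.findings.append("face does not match ID document")
    if s.ai_assist_flags > 0:
        s.findings.append(f"{s.ai_assist_flags} unauthorized-AI signals")

def verdict(s: CandidateSession) -> str:
    screen_before(s)
    screen_during(s)
    return "escalate to reviewer" if s.findings else "clear"

session = CandidateSession("DE", True, False, True, ai_assist_flags=2)
print(verdict(session), session.findings)
```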
SmartSearch partners with Daon on biometric integration
SmartSearch has released a whitepaper on deepfakes and future-proofing identity verification in the age of AI. The announcement also marks the launch of an enhanced version of the firm’s SmartDoc product, with updated biometric liveness detection, document analysis and tamper detection.
The UK firm has announced a partnership with Daon, integrating its biometric identity assurance technology into SmartDoc. The integration supports compliance with AML legislation and KYC and KYB regulatory mandates. Per a release, SmartDoc uses machine learning and optical character recognition (OCR), with extensive data coverage from more than 200 global sources, to combine authentication and biometric technology for validating identity documents, digitally authenticating customer identities and fighting fraud.
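One concrete check that OCR-driven document validation systems of this kind commonly run is the ICAO 9303 check digit on a passport's machine-readable zone (MRZ). The snippet below implements that public standard algorithm; it is offered as background only and is not taken from SmartDoc.

```python
# ICAO 9303 MRZ check digit: weights 7,3,1 repeating; digits keep their
# value, A-Z map to 10-35, the filler '<' counts as 0; result is sum mod 10.
def mrz_check_digit(field: str) -> int:
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10
        else:  # filler '<'
            value = 0
        total += value * weights[i % 3]
    return total % 10

# Example: the ICAO 9303 sample document number "L898902C3" has check digit 6.
print(mrz_check_digit("L898902C3") == 6)  # True
```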
“Financial criminals are fast, agile and quick to adopt new technologies to conduct and conceal their illicit activities,” says Fraser Mitchell, chief product officer at SmartSearch. “Businesses in regulated sectors need to be equally armed with cutting edge innovations to fight money laundering and shield their organisation and customers from fraud. This is why we have partnered with Daon, to assist, equip and empower companies in regulated sectors, enabling them to simultaneously fight crime and grow their business with confidence.”
“Using identity verification and biometric authentication is key to building trust in the customer onboarding process,” says Clive Bourke, Daon’s president for EMEA and APAC. “We are delighted to have been selected by SmartSearch to provide our identity orchestration platform, TrustX, for its SmartDoc solution, and look forward to working together to enable businesses in regulated sectors to validate customer identities.”
Deepfake quality has taken us past the Uncanny Valley: Reality Defender
Ben Colman, CEO of Reality Defender, takes to the company blog to raise the alarm about just how good video deepfakes have gotten.
“People are finding it increasingly difficult to tell the difference between real and synthetic videos of people,” Colman says. “A recent study of more than 2,000 UK and U.S. consumers found that when presented with a mix of real and deepfake images and videos, only 0.1 percent correctly identified what was real and what was fake.”
There is a growing list of generative deepfake models capable of producing deepfakes so realistic they are impossible to recognize with the naked eye. Microsoft teased its VASA-1 engine, able to generate “hyper-realistic talking face video” from nothing but a single static image, an audio clip and a text script – but it opted to hold it back amid concerns about misuse.
No such luck with Google’s latest AI video generator, Veo 3, which Colman says is flooding the internet with clips so real that they’ve moved us into a whole new era: “you could say we’ve left the uncanny valley into an era of indistinguishable synthetic media.”
Veo 3 gives any Google AI subscriber the ability to produce hyperrealistic deepfake video. But, as Colman notes, the sheer scope of options for deepfake fraudsters is part of the problem. He says “any user with a smartphone can use cheaper alternatives – like HeyGen avatars or open-source models – to generate deepfake videos with just a few clicks.”
Reality Defender’s VP of Human Engagement, Gabe Regan, also weighs in, with a blog post about deepfakes in banking and the growing threat to financial security.
“Sophisticated AI-generated and synthetic media can bypass traditional banking security protocols,” he says. “As a result, banking deepfake fraud is an escalating threat, with tens of billions of dollars at risk. Deloitte’s Center for Financial Services projects that generative AI could cause fraud losses of $40 billion in the U.S. by 2027.”
Regan underscores the need for multimodal verification, and the ability to detect threats in real time, with detection systems analyzing multiple signals, such as audio patterns, visual inconsistencies and behavioral anomalies.
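A minimal sketch of that kind of multi-signal fusion might look like the following. The channel names, weights and threshold are assumptions made for illustration, not Reality Defender's detection model.

```python
# Sketch of multimodal score fusion: combine per-channel suspicion scores
# (audio artifacts, visual inconsistencies, behavioral anomalies) into one
# real-time risk decision. Weights and threshold are illustrative only.
SIGNAL_WEIGHTS = {"audio": 0.4, "visual": 0.4, "behavioral": 0.2}

def fused_risk(scores: dict[str, float], threshold: float = 0.6) -> tuple[float, str]:
    """Each score is a 0-1 suspicion value from a dedicated detector.
    Missing channels are skipped and the remaining weights renormalized."""
    present = {k: w for k, w in SIGNAL_WEIGHTS.items() if k in scores}
    total_w = sum(present.values())
    risk = sum(scores[k] * w for k, w in present.items()) / total_w
    return risk, ("block / step-up auth" if risk >= threshold else "allow")

# Example: strong audio-deepfake signal, milder visual and behavioral signals.
print(fused_risk({"audio": 0.9, "visual": 0.5, "behavioral": 0.3}))
```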
Pindrop report shows continuing upward curve for deepfakes
Pindrop has released its 2025 Voice Intelligence & Security Report; unsurprisingly, it identifies fraud as a big, bad, escalating problem.
Pindrop’s findings show deepfake fraud attempts rising by more than 1,300 percent in 2024, jumping from an average of one per month to seven per day.
“Voice fraud is no longer a future threat – it’s here, and it’s scaling at a rate that no one could have predicted,” says Vijay Balasubramaniyan, CEO of Pindrop, in a release. “Deepfakes, synthetic voice tech, and AI-driven scams are reshaping the fraud landscape. The numbers are staggering, and the tactics are growing more sophisticated by the day.”
From spoofing-as-a-service platforms to AI-enhanced phishing and synthetic voice modulation, the arsenal is endless. Deepfaked calls are projected to increase by 155 percent in 2025.
Retail fraud, which rose 107 percent in 2024, is projected to more than double again in 2025, reaching 1 in every 56 calls.
The message is clear: protect yourself. Or else.