‘KYC alone is not enough’: Proof, Reality Defender on threat of AI-driven fraud

The growing prevalence of generative AI has unlocked positive potential across a range of industries, but it has also amplified challenges, particularly around identity fraud and trust in digital transactions. In a webinar this week, industry leaders from Proof and Reality Defender discussed the shift in how AI-driven fraud is being carried out and the measures necessary to address emerging threats.

Kelly Pidhirsky, VP of solutions consulting at Proof, highlighted the company’s journey from its roots as Notarize to becoming a platform centered on securing high-value digital interactions.

“We started out as the first company to bring legal notarizations online, and since then we’ve really revolutionized the traditional paper-bound processes, and it’s all about setting a new standard for digital interactions,” Pidhirsky says. “Fast forward to today, and it’s so much more than just online notarizations, not that that’s not incredibly important, it’s still the foundation, but we’re all about now becoming this comprehensive, identity-centric platform.”

Proof’s identity authorization network cryptographically binds verified real-world identities to digital documents, aiming to reduce fraud and ensure compliance. With a client base spanning over 7,000 organizations, including financial institutions, healthcare providers, and real estate firms, Proof has redefined trust in virtual environments.
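To make the idea of cryptographic binding concrete, the sketch below signs a document hash together with a verified identity record, so that later tampering with either invalidates the signature. This is a minimal illustration using Ed25519 signatures, not Proof’s actual architecture; the function names and payload format are assumptions.

```python
# Hypothetical sketch of binding a verified identity to a document.
# Illustrative only; not Proof's implementation.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# A key pair issued to a user only after their real-world identity is verified.
identity_key = ed25519.Ed25519PrivateKey.generate()

def bind_identity_to_document(document: bytes, verified_identity: dict) -> dict:
    """Sign the document hash together with the verified identity record."""
    payload = json.dumps({
        "document_sha256": hashlib.sha256(document).hexdigest(),
        "identity": verified_identity,
    }, sort_keys=True).encode()
    return {"payload": payload, "signature": identity_key.sign(payload)}

def verify_binding(binding: dict, public_key: ed25519.Ed25519PublicKey) -> bool:
    """Check that the identity-document binding is intact and authentic."""
    try:
        public_key.verify(binding["signature"], binding["payload"])
        return True
    except InvalidSignature:
        return False

doc = b"Deed of sale ..."
binding = bind_identity_to_document(doc, {"name": "Jane Doe", "kyc": "verified"})
print(verify_binding(binding, identity_key.public_key()))  # True
```

Because the signature covers both the document hash and the identity record, a forged document or a swapped identity fails verification, which is the property that makes such bindings useful against fraud.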

The consensus among the panelists is that generative AI has evolved significantly, with real-time video deepfakes emerging as the next frontier. Limitations that were still evident six months ago, such as errors at extreme facial angles or during actions like touching one’s hair, have since been overcome. This progress allows fraudsters to convincingly impersonate individuals in real time, a feat that requires sophisticated training and computational resources.

“The rise of generative AI exacerbates identity trust issues,” Pidhirsky adds, pointing to the ease with which bad actors can exploit advanced tools to impersonate individuals and undermine KYC (Know Your Customer) protocols. “KYC alone is not enough,” she warns, citing an $81 billion annual loss attributed to identity-related fraud.

Battling deepfake threats

Mason Allen, head of sales at Reality Defender, addresses the increasing sophistication of deepfake technology, which has evolved from crude digital imitations to hyper-realistic manipulations capable of undermining enterprise and government systems. Reality Defender, a New York-based cybersecurity company, employs a multi-model approach to detect signals of synthetic media in real time, offering defenses against fraudulent activity. Having raised $33 million in an expansion of its Series A funding round last month, the company trains its AI detection models on extensive datasets, enabling real-time identification of deepfake forgeries during critical moments of user verification.
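The multi-model approach can be pictured, in very rough form, as an ensemble: several independent detectors score the same media sample, and their outputs are fused into a single risk score. The sketch below is a generic illustration of that pattern with stand-in detectors and weights; it is not Reality Defender’s actual models or API.

```python
# Generic ensemble-style deepfake scoring; a hypothetical illustration.
from typing import Callable, List

Detector = Callable[[bytes], float]  # returns probability sample is synthetic

def fuse_scores(sample: bytes, detectors: List[Detector],
                weights: List[float]) -> float:
    """Weighted average of per-model scores."""
    scores = [d(sample) for d in detectors]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Stand-in detectors (real ones would be trained neural networks).
visual_artifact_model: Detector = lambda s: 0.91  # e.g. face-warping artifacts
frequency_model: Detector = lambda s: 0.78        # e.g. spectral fingerprints
temporal_model: Detector = lambda s: 0.66         # e.g. frame-to-frame jitter

risk = fuse_scores(b"...video frames...",
                   [visual_artifact_model, frequency_model, temporal_model],
                   weights=[0.5, 0.3, 0.2])
print(f"deepfake risk: {risk:.2f}")  # 0.82 -> above a 0.5 review threshold
```

The appeal of fusing multiple detectors is resilience: a deepfake technique that evades one signal (say, visual artifacts) may still trip another (spectral or temporal inconsistencies), which matters in the cat-and-mouse dynamic Allen describes.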

“Deepfakes are the next iteration of a cat-and-mouse game akin to antivirus software,” Allen remarks, explaining how Reality Defender continuously adapts to emerging deepfake techniques. He emphasizes the alarming scale of the problem, referencing Deloitte’s projection that generative AI fraud could become a $40 billion annual challenge by 2027.

In financial contexts, real-time video calls are often used for KYC processes and high-value transactions. However, fraudsters have begun exploiting these mechanisms. For example, a deepfake scam earlier this year involved impersonating a chief financial officer and two deputies on a live Zoom call, resulting in the theft of $25 million. Such incidents highlight the vulnerabilities of existing identity verification systems.

The risks extend beyond financial crimes. Deepfake technology has been used to impersonate CEOs in scams and misinformation campaigns, taking advantage of executives’ publicly available voice and image data. Even high-profile figures, such as U.S. Senator Ben Cardin, have fallen victim to deepfakes, with a recent incident involving a fabricated interaction with a purported Ukrainian official.

Generative AI platforms like Runway and ChatGPT have progressed rapidly, making it increasingly difficult to distinguish between real and synthetic media. Allen illustrates this with examples of advancements in video generation technology, noting how deepfakes have evolved to the point of being indistinguishable from authentic content. While these tools enhance creativity and efficiency, they also empower bad actors to scale social engineering attacks and financial fraud with minimal technical expertise.

Call to action

The speakers shared a call to action: a reevaluation of digital identity verification systems to address vulnerabilities exposed by generative AI. Pidhirsky emphasizes the importance of verifying the “living, present person” behind documents, while Allen stresses the need for enterprises to adopt proactive measures for identifying and mitigating deepfake risks.

“Generative AI isn’t just a future problem, it’s a present reality,” Pidhirsky warns. The webinar advocates for enhanced collaboration between technology providers and stakeholders to secure digital interactions against increasingly sophisticated threats.

Proof, Entrust and TransUnion are also introducing tools to address fraud risks. For example, Entrust launched a cloud-based identity verification service that supports biometric checks, while TransUnion focuses on fraud insights derived from behavioral analytics. Additionally, Proof’s Verify deepfake defense aims to reduce the risk of deepfake-enabled fraud by ensuring that users are legitimate.

Proof’s Verify platform already serves over 7,000 organizations, including financial institutions, healthcare providers, small businesses, and government agencies.

The experts also advocate for continuous monitoring and dynamic risk assessment as key strategies. Technologies that verify identity through video, voice, and behavioral patterns in real time can help thwart attacks while maintaining ease of use for genuine users.
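As a rough illustration of what dynamic risk assessment might look like in practice, the sketch below fuses video, voice, and behavioral signals into a session risk score, then chooses between letting the session continue, stepping up verification, or terminating it. The signal names, weights, and thresholds are all illustrative assumptions, not any vendor’s product logic.

```python
# Hypothetical continuous-monitoring loop for a live verification session.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    video_liveness: float    # 0 = clearly live, 1 = likely synthetic
    voice_liveness: float
    behavior_anomaly: float  # deviation from the user's typical patterns

def session_risk(s: SessionSignals) -> float:
    # Equal-weight fusion keeps the example simple; a production system
    # would tune weights and incorporate many more signals.
    return (s.video_liveness + s.voice_liveness + s.behavior_anomaly) / 3

def next_action(risk: float) -> str:
    if risk < 0.3:
        return "continue"           # low friction for genuine users
    if risk < 0.7:
        return "step_up_challenge"  # e.g. re-run a liveness check
    return "terminate_session"

print(next_action(session_risk(SessionSignals(0.1, 0.2, 0.1))))  # continue
```

The tiered thresholds capture the trade-off the experts describe: genuine users pass through with minimal friction, while suspicious sessions face escalating checks rather than an immediate, error-prone block.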

As deepfake technology becomes increasingly accessible, the potential for misuse will likely grow.
