From hiring to management, deepfake scams target remote workplace

Deepfake detection firms building bulwarks

Companies large and small are increasingly targeted by AI-generated personas, from fake job applicants to scammers using deepfake voice and video to impersonate executives. To fight off the influx of fraudsters, deepfake detection firms are launching new products – and attracting funding from well-known investors.

Fake employees and fake executives

Companies are receiving growing numbers of AI-generated profiles posing as job applicants, with research firm Gartner estimating that one in four candidates will be fake by 2028. The cybersecurity and cryptocurrency sectors are popular targets because they often offer remote roles.

The risks of hiring a fake job seeker range from malware installation and ransom demands to theft of data, trade secrets and funds. Such scams can also carry political risks.

Last year, the U.S. Justice Department said that more than 300 U.S. firms had inadvertently hired impostors with ties to North Korea for IT work. The fake-employee industry is now broadening to include criminal groups from Russia, China, Malaysia and South Korea, according to CNBC.

Scammers rely on fake photo IDs and fabricated employment histories to land interviews. In response, companies such as digital asset recovery firm CAT Labs are turning to identity verification tools to weed out fraudulent candidates, says CEO and founder Lili Infante.

The trend of fake executives has not subsided either.

Last week, an AI impersonation case made headlines after a Singapore-based multinational corporation nearly lost US$499,000. A scammer posing as the company’s chief financial officer contacted the finance director through the messaging app WhatsApp and set up a Zoom conference call with accomplices, tricking the director into sending the funds. The fraudsters relied on deepfake technology.

The funds were recovered after the finance director alerted the Singapore Police Force, which worked with Hong Kong authorities to retrieve the money from a money mule account used by the scammers, police said in a statement.

Last year, a Hong Kong-based company was duped into sending US$25 million in a similar scam involving a deepfake video.

Companies building bulwarks against deepfake threats

Security firms have been developing solutions to prevent these scenarios.

Reality Defender, which specializes in securing communication channels against deepfake impersonations, introduced a tool last year designed to integrate into enterprise video conferencing platforms. Its real-time video analysis identifies subtle artifacts and inconsistencies that human observers cannot reliably detect at scale, the company says.
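To illustrate the kind of frame-level screening such tools automate, the toy sketch below flags video frames that barely differ from their predecessor, a crude proxy for frozen or over-smoothed synthetic output. This is a hypothetical heuristic for illustration only; it is not Reality Defender's method, and real detectors model far subtler artifacts.

```python
# Toy frame-level screen: flag frames with implausibly little motion.
# Illustrative assumption only, not any vendor's detection model.
import numpy as np

def flag_low_motion_frames(frames: np.ndarray, threshold: float = 1.0) -> list:
    """frames: (T, H, W) grayscale video; returns indices of suspicious frames."""
    flagged = []
    for t in range(1, len(frames)):
        # Mean absolute pixel change between consecutive frames
        residual = np.abs(frames[t].astype(float) - frames[t - 1].astype(float)).mean()
        if residual < threshold:
            flagged.append(t)
    return flagged
```

In practice such a signal would be one weak feature among many; commercial detectors combine far richer cues learned from large corpora of real and generated video.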

Currently, security professionals focus on spotting lighting inconsistencies, unnatural eye movements, audio-visual synchronization problems, and boundary detection issues such as those shown during a “hand test” in which a job candidate is asked to place a hand in front of their face.

Organizations also monitor behavioral markers such as delayed responses to unexpected questions, mechanical speech patterns, and contextual comprehension gaps. They add technical verification steps, such as analyzing IP addresses or asking job applicants to switch platforms mid-interview, Reality Defender’s co-founder and CEO Ben Colman explains in a blog post.
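The manual checks described above can be combined into a simple triage score. The sketch below is hypothetical: the signal names and weights are illustrative assumptions, not any vendor's scoring model.

```python
# Toy aggregation of manual deepfake-interview signals into a triage score.
# Signal names and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class InterviewSignals:
    failed_hand_test: bool = False         # artifacts when a hand occludes the face
    av_sync_drift: bool = False            # audio-visual synchronization problems
    delayed_unexpected_answers: bool = False
    mechanical_speech: bool = False
    ip_mismatch: bool = False              # IP inconsistent with claimed location
    refused_platform_switch: bool = False  # declined to change platform mid-interview

WEIGHTS = {
    "failed_hand_test": 0.30,
    "av_sync_drift": 0.20,
    "delayed_unexpected_answers": 0.10,
    "mechanical_speech": 0.10,
    "ip_mismatch": 0.15,
    "refused_platform_switch": 0.15,
}

def risk_score(s: InterviewSignals) -> float:
    """Sum the weights of triggered signals; 0.0 = no flags, 1.0 = all flags."""
    return round(sum(w for name, w in WEIGHTS.items() if getattr(s, name)), 2)

def verdict(score: float, threshold: float = 0.4) -> str:
    return "escalate for manual review" if score >= threshold else "proceed"
```

For example, a candidate who fails the hand test and connects from a mismatched IP would score 0.45 under these made-up weights and be escalated; as Colman notes below, though, any such manual checklist degrades as generation quality improves.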

“What appears detectable to the human eye today will likely become imperceptible tomorrow as generative AI systems continue their rapid evolution,” says Colman. “The above manual detection methods may seem somewhat effective now, but are progressively becoming less reliable as deepfake technology improves its ability to replicate natural human movements, expressions, and environmental interactions.”

Pindrop, a voice-fraud detection firm backed by Andreessen Horowitz and Citi Ventures, may soon expand into video authentication, according to the CNBC report.

Investment is also flowing into new products. Adaptive Security, another company focused on fighting AI security risks, announced a US$43 million funding round at the beginning of April, led by Andreessen Horowitz (a16z) and the OpenAI Startup Fund.

The startup focuses on social engineering attacks boosted by AI. It simulates attacks by AI-generated impersonators using voice, SMS and email to test whether employees can be tricked into transferring funds to a scammer, and it also offers real-time message analysis and risk scoring.
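A minimal sketch of real-time message risk scoring of the kind described above might combine pressure language, money-movement requests and sender checks. The keyword lists, weights and trusted-domain comparison here are illustrative assumptions, not Adaptive Security's actual product logic.

```python
# Toy risk scorer for an inbound message. All cues and weights are
# illustrative assumptions, not any vendor's product logic.
URGENCY_CUES = ("urgent", "immediately", "asap", "confidential")
PAYMENT_CUES = ("wire", "transfer", "invoice", "payment", "gift card", "funds")

def score_message(text: str, sender_domain: str, trusted_domain: str) -> float:
    """Return a 0..1 risk score for an inbound request."""
    t = text.lower()
    score = 0.0
    if any(cue in t for cue in URGENCY_CUES):
        score += 0.3  # pressure language
    if any(cue in t for cue in PAYMENT_CUES):
        score += 0.4  # asks for money movement
    if sender_domain.lower() != trusted_domain.lower():
        score += 0.3  # external or lookalike sender
    return round(score, 2)
```

A message like "URGENT: wire the funds immediately" from a lookalike domain would hit all three cues; production systems would instead use trained models over message content, metadata and history rather than static keyword lists.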

“Attackers can generate realistic AI personas – deepfake versions of your coworkers, your CEO, even you – in seconds,” the company warns. “These personas can make phone calls, send emails, or text your team using AI-generated content that sounds exactly right. They’re built on top of open-source LLMs, trained on public databases, and fine-tuned to fool your defenses.”

Meanwhile, more firms are looking towards this market. Boston-based startup Modulate introduced a conversational AI tool last week called VoiceVault that is designed to detect and prevent fraud and scams across voice channels in real time. Its AI model interprets conversational context, tone, emotional cues and intent, according to the firm.
