
Trust in the age of agentic AI: insights from Pindrop, Anonybit, Validsoft

Already at crisis levels, fraud could scale exponentially unless AI agents can be verified

One in every 599 calls is fraud, according to a recent webinar from Pindrop – which might explain why we’re all getting so many calls from the tax authorities, or the police, or someone who can offer you a really great deal on publishing a book you have already published.

By now, it’s not news that fraud, and deepfake fraud in particular, is skyrocketing thanks to the proliferation of cheap, easily accessible generative AI tools. Still, it’s worth considering the numbers to get a sense of scale: Pindrop expects a 162 percent increase in deepfake fraud in 2025.

And it’s important to note that fraud techniques continue to evolve at pace. Agentic AI is the latest tool to find a home in the fraudster’s belt, enabling machines to sound human, act autonomously and scale impersonation fraud attacks in unprecedented ways.

The webinar – the first of a four-part series – is entitled “What Agentic AI Means for the Future of Fraud,” and features Pindrop VP of Deepfake Detection Amit Gupta in conversation with Mo Merchant, the company’s director of research and development.

“More fraudsters are trying to automate their attacks,” says Gupta, noting a spike in biometric injection attacks. He says fraud rings have typically had to employ hundreds of individuals to work with credentials procured on the dark web, to identify which accounts are worth a full-scale attack. “Now, fraudsters in the garage can do the same damage. We are hearing calls in which account balance inquiries, or even providing credentials is now done with synthetic voices.”

Bots will try as many times as they’re allowed, and lax authentication systems will allow them to cycle through heaps of stolen credentials and phish for the one that works. Quality of automated attacks is improving, too. There are more and better machine learning models, notably for accurately cloning or synthesizing human voices.
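Defending against this kind of automated credential cycling starts with basic attempt throttling: a system that notices repeated failures against an account and locks further tries. The sketch below is a minimal, hypothetical illustration of that idea (the `AttemptThrottle` class and its parameters are invented for this example, not from any vendor discussed here), using a sliding window over failure timestamps.

```python
import time
from collections import defaultdict


class AttemptThrottle:
    """Track failed login attempts per account; lock out after a limit.

    Hypothetical sketch of attempt throttling against credential stuffing,
    using a sliding time window of recorded failures.
    """

    def __init__(self, max_attempts=5, window_seconds=300):
        self.max_attempts = max_attempts
        self.window_seconds = window_seconds
        self._failures = defaultdict(list)  # account -> failure timestamps

    def record_failure(self, account, now=None):
        now = time.time() if now is None else now
        self._failures[account].append(now)

    def is_locked(self, account, now=None):
        now = time.time() if now is None else now
        # Keep only failures that fall inside the sliding window.
        recent = [t for t in self._failures[account] if now - t < self.window_seconds]
        self._failures[account] = recent
        return len(recent) >= self.max_attempts


throttle = AttemptThrottle(max_attempts=3, window_seconds=60)
for _ in range(3):
    throttle.record_failure("alice", now=100.0)
print(throttle.is_locked("alice", now=101.0))  # True: three failures inside the window
print(throttle.is_locked("alice", now=200.0))  # False: the failures have aged out
```

Real systems layer this with IP reputation, device fingerprinting and, as the speakers argue, voice liveness checks; a per-account counter alone does little against a bot rotating through many accounts.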

Merchant says it’s easy to find more than 2,400 text-to-speech engines with a simple search, and that’s only the beginning. “Now fraudsters and social engineers, they can impersonate anyone – executives, colleagues, loved ones. Anyone with a browser and an internet connection can make this happen.”

Agentic AI is the next evolution.

“We’ve come to the point where things are now real time,” Merchant says. “We’re talking about interactive deepfakes. Conversational fluency is there. The human aspect is present – so the agent on the other end feels like it’s someone they can trust.”

Conversational agents have more comprehensive memory, and they can coordinate and collaborate on attacks, because they are all working toward the same goal: find a breach.

Gupta says that, “essentially what you should now assume is that all real time communication is at risk from deepfakes.” It’s not just straight-up financial fraud, either; some fraudsters have identified hiring as a new vector for gaining access to company resources – which is to say, companies are unwittingly hiring deepfaked AI avatars.

Validsoft underlines necessity of real-time biometric authentication for voice

Validsoft takes a parallel look at the same core question: as AI systems become more capable of executing tasks, “who is telling the AI what to do?”

A post from the company focuses on “Why the Voice Channel Matters.” The firm makes similar points to Pindrop, noting that “AI agents are being increasingly deployed in voice-driven environments, from call centers and smart devices to enterprise virtual assistants. These environments are particularly vulnerable to voice spoofing and deepfake audio attacks, especially as synthetic voice technologies become increasingly sophisticated and accessible.”

Real-time deepfake audio detection technology is increasingly necessary to confirm that whoever you’re talking to isn’t a scammer. “Without identity assurance, the risk of unauthorized or malicious instructions increases, compromising data, eroding trust, and exposing organizations to regulatory and reputational risk.”

Anonybit CEO envisions ‘Circle of Identity’ based on biometrics

Anonybit also has thoughts on agentic AI. Company CEO Frances Zelazny has published a blog with the intriguing title, “The Rise of Agentic AI: When Machines Take the Lead, Who Do They Become?”

“These systems don’t just provide answers; they take actions,” she says. “They schedule meetings, negotiate contracts, approve transactions, even write and deploy software updates – often without human intervention. They are not just assistants; they are actors in the digital ecosystem.”

Financial firms are among those to embrace agents, deploying AI-powered chatbots and virtual assistants to offer personalized financial advice based on individual data. Zelazny singles out the firm Klarna, which “seems to have taken it one step further, announcing an advanced AI assistant that handles customer payments, refunds and other payment escalations.” For Anonybit’s CEO, the case is interesting because “it is touching payments, which ultimately ties into identity management and my question – when AI acts on our behalf, how do we ensure it isn’t manipulated, corrupted, or impersonated?”

It’s the same theme underlying most discussions about identity: trust matters. Since we are already facing a fraud crisis, building trust mechanisms into AI agents is necessary to avoid an amplification of that crisis into unprecedented scale.

In this new world, not only must we be able to trust the agentic machines; we have to ensure they trust each other – so-called machine-to-machine authentication, widely used in IoT networks, enterprise systems, and automated business processes. AI agents must authenticate dynamically as they make independent decisions and execute transactions.

“In other words, unlike traditional authentication, which relies on human verification, machine-to-machine authentication requires cryptographic mechanisms that allow AI agents to confirm each other’s identities before executing transactions or exchanging data. Without this assurance, AI-to-AI interactions become a massive attack surface for fraud, impersonation, and unauthorized transactions.”
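The quote above describes agents confirming each other’s identities cryptographically before transacting. A minimal way to illustrate the idea is a challenge-response handshake: one agent issues a fresh random nonce, the other proves it holds the right key by returning a keyed digest of that nonce. The sketch below is an illustrative toy (the `Agent` class and a shared symmetric key are assumptions for brevity; real machine-to-machine authentication typically uses asymmetric keys, certificates or mutual TLS rather than a shared HMAC key).

```python
import hashlib
import hmac
import secrets


class Agent:
    """Toy agent that can authenticate a peer via HMAC challenge-response."""

    def __init__(self, name, shared_key):
        self.name = name
        self._key = shared_key
        self.last_challenge = None

    def issue_challenge(self):
        # A fresh random nonce means a recorded response can't be replayed.
        self.last_challenge = secrets.token_bytes(16)
        return self.last_challenge

    def respond(self, challenge):
        # Prove knowledge of the key without ever transmitting it.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def verify(self, response):
        expected = hmac.new(self._key, self.last_challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)


key = secrets.token_bytes(32)
payments_agent = Agent("payments", key)
ledger_agent = Agent("ledger", key)

challenge = payments_agent.issue_challenge()
response = ledger_agent.respond(challenge)
print(payments_agent.verify(response))  # True: the peer holds the shared key
```

The design point is that neither agent trusts a bare claim of identity; trust is established per interaction, before any transaction or data exchange proceeds.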

Among solutions, Zelazny looks at verifiable credentials (VCs), but says that, while they add a layer of trust, they are primarily linked to devices rather than individuals, which introduces the possibility that credentials could be bound to invading agents. “To build true security for agentic AI, we need to think beyond credentials and toward biometric-bound identity systems.”

Zelazny has coined the term “Circle of Identity” to describe a framework designed to provide continuous trust across the digital identity lifecycle, leveraging biometrics that are registered and bound to any credential, device, token or other asset. In the context of agentic AI, that means an individual can provide an encrypted digital signature authenticated with biometrics to link to an AI agent, authorizing it to act on their behalf.

“Every action taken by the AI is cryptographically linked to the original human approver through a dynamic, time-bound biometric signature. Each transaction or action generates a unique token – meaning that even if an attacker intercepts one, they cannot reuse it for another transaction.

The biometric remains consistent, but the identity token evolves. This ensures that even if an AI agent is operating autonomously, its actions remain traceable and verifiable, and any compromise can be quickly mitigated.”
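The properties described in the quote – one unique token per transaction, time-bound, and useless if intercepted and replayed – can be sketched in code. The following is not Anonybit’s implementation, just an illustrative toy under stated assumptions: a key derived from a hash of an already-enrolled biometric template (here a placeholder byte string), a per-token random nonce and expiry, and a set of seen nonces to reject replays.

```python
import hashlib
import hmac
import secrets
import time


def derive_agent_key(biometric_template_hash: bytes, agent_id: str) -> bytes:
    """Bind a signing key to an enrolled biometric and one specific agent."""
    return hmac.new(biometric_template_hash, agent_id.encode(), hashlib.sha256).digest()


def issue_token(agent_key: bytes, transaction: str, ttl_seconds: int = 60) -> dict:
    """One token per transaction: nonce + expiry make it single-use and time-bound."""
    nonce = secrets.token_hex(8)
    expires = int(time.time()) + ttl_seconds
    msg = f"{transaction}|{nonce}|{expires}".encode()
    sig = hmac.new(agent_key, msg, hashlib.sha256).hexdigest()
    return {"transaction": transaction, "nonce": nonce, "expires": expires, "sig": sig}


def verify_token(agent_key: bytes, token: dict, seen_nonces: set, now=None) -> bool:
    now = int(time.time()) if now is None else now
    if token["expires"] < now or token["nonce"] in seen_nonces:
        return False  # expired, or an intercepted token being replayed
    msg = f"{token['transaction']}|{token['nonce']}|{token['expires']}".encode()
    expected = hmac.new(agent_key, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False
    seen_nonces.add(token["nonce"])  # burn the nonce: each token works once
    return True


key = derive_agent_key(hashlib.sha256(b"enrolled-voiceprint").digest(), "agent-42")
seen = set()
tok = issue_token(key, "pay $50 to utility co")
print(verify_token(key, tok, seen))  # True on first use
print(verify_token(key, tok, seen))  # False: the same token cannot be reused
```

As in the quote, the biometric-derived key stays constant while each token is unique and evolving, so an autonomous agent’s actions remain traceable to the human approver and a stolen token is worthless for any other transaction.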
