Reality Defender integrates deepfake detection into Charm Security's agentic workforce

Reality Defender has formed a strategic partnership with Charm Security, a New York-based company that “builds the Agentic AI Workforce for scams, fraud, and cybercrime prevention and resolution.”
A release says the partnership integrates Reality Defender's deepfake detection capabilities directly into Charm's Agentic Workforce, with detection across voice, image and text delivered by Charm's AI agents. The intended result is more certainty, delivered faster.
“Deepfakes are fundamentally changing enterprise threats by attacking trust itself,” says Ben Colman, CEO of Reality Defender. “Teams leveraging Charm Security’s Agentic Workforce can now identify manipulated communications in real time and respond before harm occurs.”
Charm, a 2026 RSAC Innovation Sandbox company, deploys its proprietary Human Vulnerabilities & Exploits (HVE) model to power fraud prevention agents that can deliver real-time insight into human intent, manipulation, coercion and deception.
“Fraud and cybercrime are human attacks from the very first signal, increasingly powered by AI-generated content,” says Roy Zur, CEO of Charm Security. “By integrating Reality Defender’s deepfake detection directly into Charm’s AI agents, we’re helping our customers resolve cases better and faster, without adding friction or complexity for professional teams.”
No agentic traffic in data is probably a visibility issue
Reality Defender has been responding to the rise of agentic threats, in keeping with its philosophy of prioritizing multiple detection models rather than a single scoring system. A blog on the company’s website draws on a conversation with Matt Smallman of SymNex Consulting and Reality Defender Chief Customer Officer Brian Levin, to look at the different types of agentic threats facing contact centers.
Various agents can be malicious, parasitic or so-called "shadow agents" – agents deployed and authorized by the customer that nonetheless end up working against a business's financial interests: "the customer authorized them. But they're operating outside your security perimeter, often holding your customer's credentials, and are entirely indifferent to your cost model."
Death by a billion AI agents
Reality Defender’s data says the real threat is coming from high-volume synthetic bot traffic.
“There’s a lot of noise in the market about high-profile deepfake attacks,” the firm says. “These are real, and they matter. But for the majority of contact center operations, the more immediate risk is high-volume, generic synthetic bots hitting IVRs at scale.”
The takeaway is, “if you don’t know that agentic AI is in your call traffic, you can’t do anything about it.”
“The starting point is assuming it’s already happening. If you’re not seeing agentic traffic in your current data, that’s more likely a visibility gap than an absence of the problem.” The solution is “caller-type signal: a reliable way to classify inbound traffic as human or synthetic before routing, authentication, or escalation decisions are made.”
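To make the idea of a caller-type signal concrete, the pre-routing gate described above can be sketched as follows. This is an illustrative sketch only: the function names, thresholds, and queue labels are hypothetical, and nothing here reflects Reality Defender's or Charm's actual APIs.

```python
# Hypothetical sketch: classify inbound calls as human or synthetic
# *before* routing, authentication, or escalation decisions are made.
# All names and thresholds are illustrative, not a vendor API.

from dataclasses import dataclass


@dataclass
class CallerTypeSignal:
    """Classification attached to a call before routing."""
    caller_type: str   # "human", "synthetic", or "uncertain"
    score: float       # synthetic-likelihood score in [0, 1]


def classify_caller(synthetic_score: float,
                    synthetic_threshold: float = 0.8,
                    human_threshold: float = 0.2) -> CallerTypeSignal:
    """Map a detector's synthetic-likelihood score to a caller type.

    Scores above synthetic_threshold are treated as bots, scores below
    human_threshold as humans, and everything in between is flagged for
    step-up verification rather than hard-blocked.
    """
    if synthetic_score >= synthetic_threshold:
        caller_type = "synthetic"
    elif synthetic_score <= human_threshold:
        caller_type = "human"
    else:
        caller_type = "uncertain"
    return CallerTypeSignal(caller_type, synthetic_score)


def route(signal: CallerTypeSignal) -> str:
    """Pick a queue from the caller-type signal, ahead of authentication."""
    return {
        "human": "standard_ivr",
        "synthetic": "bot_containment",
        "uncertain": "step_up_verification",
    }[signal.caller_type]


print(route(classify_caller(0.95)))  # bot_containment
print(route(classify_caller(0.05)))  # standard_ivr
print(route(classify_caller(0.50)))  # step_up_verification
```

The point of the middle "uncertain" band is operational: classification happens before authentication, so ambiguous calls can be escalated to stronger verification instead of being wrongly blocked or wrongly trusted.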
Article Topics
AI agents | AI fraud | biometric authentication | biometrics | deepfake detection | deepfakes | fraud prevention | Reality Defender | RSAC 2026 | voice biometrics