Shufti warns that lab‑tested deepfake tools are failing in the real world

Shufti is warning that financial institutions are facing a far larger deepfake problem than they realize, as synthetic identity fraud is projected to surge to $58.3 billion by 2030.
The company says many banks and fintechs rely on deepfake detection systems that test well in the lab but fail under the messy conditions of real‑world KYC checks. “The industry needs to stop treating lab accuracy as deployment readiness,” says Frayam Asif, CTO at Shufti.
“The conditions under which we verify identity bear almost no resemblance to the conditions under which we test for fraud.”
Juniper Research forecasts a 153 percent rise in synthetic identity fraud losses between 2025 and 2030. This is driven by genAI tools that now produce manipulated media capable of surviving compression, re‑encoding and low‑quality device capture.
Yet most detection models are trained on clean, predictable datasets. These bear little resemblance to the bandwidth‑limited, device‑variable environments in which identity verification actually occurs, Shufti argues.
Fine textures and natural sensor noise, the signals many models depend on, are often stripped away before analysis, creating a growing gap between headline accuracy scores and real‑world performance.
For financial institutions this leads to rising manual review queues, higher false‑negative rates, increased operational costs and growing regulatory exposure. Customer experience may degrade as verification thresholds are tightened to compensate for unreliable detection. Shufti describes this as a growing “trust tax” embedded into every digital transaction.
The company argues that the industry needs a structural shift in how deepfake risk is addressed. Detection must be validated under real‑world conditions rather than controlled benchmarks, and no single forensic cue should be treated as definitive, given how easily compression distorts such signals.
Liveness checks must evolve to handle both live capture and uploaded flows without adding unnecessary friction, and models require continuous adaptation. Deepfake techniques are evolving too fast for traditional deployment cycles.
Shufti has developed a production‑calibrated deepfake detection architecture built around a multi‑layer forensic model known as the Seven Gates Framework. Seven independent authenticity hypotheses, including biometric structure and compression history, must align before deepfake risk is flagged.
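Shufti has not published the framework's internals, but the requirement that several independent checks align before flagging can be sketched as a simple conjunction over per-gate suspicion scores. Everything below, the gate names, the threshold, and the all-gates rule, is an illustrative assumption, not the vendor's actual logic:

```python
# Hypothetical sketch of a multi-gate flagging rule. Gate names, scores and
# the 0.7 threshold are invented for illustration; they are NOT taken from
# Shufti's Seven Gates Framework, whose internals are not public.
from dataclasses import dataclass

@dataclass
class GateResult:
    name: str
    suspicion: float  # 0.0 (looks authentic) .. 1.0 (looks manipulated)

def flag_deepfake(gates: list[GateResult], threshold: float = 0.7) -> bool:
    """Flag media only when every independent gate exceeds the suspicion
    threshold, so a single cue degraded by compression or re-encoding
    cannot trigger a false positive on its own."""
    return all(g.suspicion >= threshold for g in gates)

gates = [
    GateResult("biometric_structure", 0.9),
    GateResult("compression_history", 0.8),
    GateResult("sensor_noise", 0.4),  # natural sensor noise present: low suspicion
]
print(flag_deepfake(gates))  # one gate disagrees, so the media is not flagged
```

The design choice this illustrates is the one the article attributes to Shufti: requiring agreement across independent signals trades some sensitivity for robustness when individual cues collapse under real-world media degradation.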
The company says the approach is designed to sustain performance under media degradation rather than rely on any single cue that may collapse in real‑world environments. “The gap we see most often is not in detection capability itself, but in how that capability is validated,” says Shahid Hanif, CEO of Shufti.
“When detection is tested only under controlled conditions, organizations carry risks they cannot see. Our focus is on building systems that remain effective where it actually matters: under the compression, diversity, and unpredictability of live verification environments.”
With deepfake‑enabled fraud posing a systemic risk, fintech and payments leaders are facing strategic questions on the true cost of friction and whether vendors are being evaluated on real‑world performance. Shufti has published a technical whitepaper outlining its approach and offering a framework for evaluating verification systems under real‑world conditions.
Bots and AI part of global fraud rise, report finds
LexisNexis Risk Solutions’ latest Cybercrime Report shows global fraud continuing to rise, with rapid digital adoption and increasingly sophisticated attack methods fueling the surge.
Based on an analysis of more than 116 billion transactions in 2025, the report finds that global fraud rates climbed 8 percent, with gaming, gambling and ecommerce hit hardest.
“Increasingly, attackers rely on advanced bots and AI-driven tools to mimic human behavior and test defences with unprecedented speed and accuracy,” said Stephen Topliss, VP of fraud and identity at LexisNexis.
The report highlights first‑party fraud as the dominant global threat for the second year in a row, making up 38.3 percent of cases. Trends vary by region. EMEA sees more than half of all fraud (51.7 percent) coming from first‑party abuse, while Latin America faces a surge in synthetic identity fraud, which accounts for nearly half of all cases (48.3 percent) there.
Synthetic fraud is now the fastest‑growing fraud type worldwide. It represents 11 percent of all fraud — an eight‑fold increase year over year — as criminals build long‑term fake identities stitched from stolen data, often without an immediate victim to raise the alarm.
The report also tracks a dramatic rise in automated activity. Automated agentic traffic grew 450 percent in 2025, especially around credit card payments and gaming logins. Meanwhile, malicious bots became far more sophisticated, with attacks rising 59 percent as criminals deployed tools capable of mimicking human cursor movements and behavioural patterns.
Ecommerce and online betting platforms saw some of the steepest increases in attacks. Ecommerce fraud rose 64 percent year over year, while login‑based account takeover attempts jumped 216 percent. Gaming and gambling sites experienced a 76 percent rise in attacks.
Regional trends show North America facing periodic ecommerce spikes, EMEA seeing a 27 percent rise in attacks driven by account takeover attempts, APAC experiencing increased desktop‑based automation attacks, and LATAM grappling with widespread synthetic identity fraud.
LexisNexis warns that cybercriminals are rapidly adopting the same AI and automation technologies transforming digital commerce. The report stresses that cross‑industry collaboration and shared intelligence remain essential to protecting consumers and maintaining trust in the digital economy. The full report, the LexisNexis Risk Solutions Cybercrime Report: Evolving Threats Beneath the Surface, is available from LexisNexis Risk Solutions.