New deepfake tool shows why face alone is no longer proof of identity

JINKUSU CAM is a live deepfake tool designed to defeat remote identity verification, targeting financial institutions and crypto platforms that rely on facial recognition and liveness checks during Know Your Customer (KYC) onboarding.
Its emergence underscores how quickly AI-enabled fraud is moving beyond forged documents and stolen selfies into real-time biometric impersonation.
According to VECERT Analyzer, JINKUSU is a threat actor that “has evolved from being a niche software developer into a cybercrime-as-a-service operator with an interconnected global infrastructure.
Its business model is based on the sale of biometric identity fraud tools, real-time video injection for KYC evasion, and automated financial asset theft systems.”
The tool is significant because it appears built to manipulate live verification sessions rather than simply generate static fake images. For years, remote identity systems have operated on the assumption that a selfie, a liveness prompt and a facial match together created a meaningful barrier against impersonation.
JINKUSU CAM shows that selfie checks and liveness prompts are not as strong as they once seemed. Those systems work only if the camera is capturing a real person in real time.
If the video or audio reaching the verification system has already been altered by a deepfake tool, the check can be fooled even when the person appears to blink, turn their head, or speak on command.
JINKUSU CAM has been described as using real-time face manipulation and, in some cases, voice manipulation to do exactly that.
VECERT Analyzer said JINKUSU CAM utilizes InsightFace for GPU-accelerated real-time face swapping, allowing for fluid gesture transfer and highly realistic facial movements. The purpose is to alter a live stream in a way that can satisfy the verification checks commonly used in remote KYC onboarding.
Those checks often ask users to blink, turn their head, smile or repeat a phrase. The premise behind them is simple: motion suggests a real, present person.
But if that motion is synthetically generated or altered before it reaches the verification engine, the prompt itself becomes a far less reliable indicator of authenticity.
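The weakness described above can be made concrete with a toy sketch. The function names, the frame representation, and the cue labels below are all hypothetical simplifications, not any vendor's actual API; the point is only that a challenge-response check sees pixel-derived cues, not their origin, so an injected stream that renders the requested motion passes identically to a live one.

```python
import random

# Hypothetical prompts a remote KYC flow might issue.
PROMPTS = ["blink", "turn_head", "smile"]

def issue_challenge() -> str:
    """Server picks a random prompt the user must perform on camera."""
    return random.choice(PROMPTS)

def liveness_passed(frames: list[dict], prompt: str) -> bool:
    """Naive check: did any frame exhibit the requested motion cue?

    The verifier only sees cues derived from the incoming video. It
    cannot tell whether the motion was performed by a live person or
    synthesized upstream by an injection tool before the stream
    reached this function.
    """
    return any(frame.get("cue") == prompt for frame in frames)

# A genuine user performing the prompt...
genuine = [{"cue": "blink"}]
# ...and an injected deepfake stream rendering the same cue on demand.
injected = [{"cue": "blink"}]

print(liveness_passed(genuine, "blink"))   # True
print(liveness_passed(injected, "blink"))  # True — indistinguishable here
```

Real liveness engines analyze far richer signals than this, but the structural problem is the same: the check validates the content of the stream, not its provenance.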
And that is what makes this threat more consequential than the appearance of a single new fraud tool. JINKUSU CAM illustrates a broader weakness in modern financial compliance and digital identity systems.
Banks, exchanges and fintech companies have increasingly relied on automated biometric checks because they allow organizations to onboard users quickly, reduce friction and satisfy anti-money laundering and KYC obligations at scale.
But the same automation that makes onboarding more efficient also creates a repeatable attack surface. Once threat actors identify a way to defeat one class of liveness detection or face comparison system, that method can often be adapted across multiple platforms using similar vendors, similar prompts and similar thresholds.
The risk does not stop with crypto exchanges, even if that sector may be an especially attractive early target because of its global reach, high transaction volumes, and long-running fraud pressures.
The same basic threat extends to traditional banks, payment companies, lenders, brokerages, and any institution that depends on remote identity proofing.
A successful impersonation during account opening, account recovery, or transaction authorization can expose an institution to far more than a one-off scam. It can enable synthetic identity fraud, mule account creation, sanctions evasion, first-party fraud, and other activities tied to broader financial crime.
The technical challenge is not just that deepfakes have become more realistic. It is that they can now be delivered in ways that undermine the assumptions behind liveness itself.
Modern deepfake tools can produce facial movement and responsiveness that appear convincing enough to defeat systems that were originally built to catch much simpler spoofing attempts such as printed photos, screen replays, or prerecorded video.
In earlier generations of anti-spoofing, the goal was to determine whether a real face was in front of the camera. Now the harder question is whether the live input itself has been manipulated before the platform ever analyzes it.
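One partial response to that harder question is to inspect the capture source itself rather than only the frames it produces. The sketch below is a deliberately crude heuristic of my own, assuming a platform can read the reported capture-device name: it flags labels associated with virtual or emulated cameras. Production injection detection relies on much deeper signals (driver provenance, OS-level attestation, frame forensics), and attackers can spoof device names, so this is illustrative only.

```python
# Illustrative markers often found in virtual-camera device names.
# A real detector would not rely on string matching alone.
SUSPECT_MARKERS = ("virtual", "obs", "emulated", "loopback")

def flag_suspect_camera(device_name: str) -> bool:
    """Return True if the reported capture device looks virtual.

    This checks metadata about the source, not the video content —
    the opposite failure mode from classic anti-spoofing, which
    trusted the source and scrutinized the content.
    """
    name = device_name.lower()
    return any(marker in name for marker in SUSPECT_MARKERS)

print(flag_suspect_camera("OBS Virtual Camera"))  # True
print(flag_suspect_camera("FaceTime HD Camera"))  # False
```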
That shift helps explain why this threat is becoming harder for institutions to treat as marginal. CaraComp, which provides AI facial comparison and biometric analysis tools, said this week that deepfake injection attacks have jumped 783 percent, a sharp increase that coincides with the continued use of single-factor biometrics in many KYC workflows.
Those findings align with iProov’s Threat Intelligence Report 2026, released Wednesday, which highlights that KYC sessions on Apple devices, until recently relatively immune to injection attacks, are now being targeted.
Together, those facts point to a widening gap between how advanced the attack methods have become and how limited many identity checks still are. They also emphasize the importance of deepfake and injection attack detection to KYC integrity.
If a platform continues to rely primarily on face matching and motion cues as proof of personhood, it may be placing far too much confidence in biometric outputs generated from an increasingly untrustworthy input stream.
Digital identity systems have long treated the human face as a convenient anchor for trust because it is intuitive, familiar, and easy to collect through a smartphone camera.
But when face-based verification can be mediated through synthetic generation and live manipulation tools, the face becomes only one more layer of data that can be altered.
That does not necessarily mean biometrics have no value. It means they are no longer sufficient on their own, especially when they are captured remotely through consumer devices and internet-connected channels that can be spoofed, injected, or otherwise manipulated.
KYC has always involved a balance between access and risk controls, but tools like JINKUSU CAM suggest that some accepted verification practices may now be more fragile than they appear.
A platform may be able to show that it captured an ID document, matched a face and completed a liveness check, yet still fail to verify that the person behind the session is real and authorized.
Consequently, as synthetic media tools become more accessible and more capable, standards that once looked strong enough for remote assurance may begin to look outdated.
The likely response will be a move toward layered verification rather than reliance on facial biometrics alone. That could include device integrity checks, network risk analysis, document authenticity testing, behavioral monitoring, anomaly detection across sessions, injection detection, and stronger step-up review for higher-risk actions.
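The layered approach described above can be sketched as a simple risk-scoring decision. Everything here is an assumption for illustration: the signal names, the weights, and the step-up threshold are invented, and real systems tune such parameters empirically and use far more signals. The design point it demonstrates is that no single check, including a passed face match, is treated as sufficient on its own; any strong adverse signal can trigger step-up review.

```python
# Hypothetical weighted risk signals for a layered verification flow.
WEIGHTS = {
    "face_match_failed": 0.4,
    "device_integrity_failed": 0.3,
    "network_risky": 0.1,
    "doc_check_failed": 0.3,
    "injection_suspected": 0.5,
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of every signal that fired."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def decide(signals: dict[str, bool], step_up_at: float = 0.4) -> str:
    """Route the session: approve, or escalate to step-up review."""
    return "step_up_review" if risk_score(signals) >= step_up_at else "approve"

# Face match passed, but the injection detector fired: escalate anyway.
print(decide({"injection_suspected": True}))  # step_up_review
print(decide({}))                             # approve
```

The friction tradeoff shows up directly in the threshold: lowering `step_up_at` catches more injected sessions but sends more legitimate users into manual review.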
The tradeoff is obvious. More layers can mean more friction for legitimate users. But the alternative is a growing mismatch between the sophistication of deepfake-enabled fraud and the simplicity of many current defenses.
JINKUSU CAM matters because it is a sign of where digital identity fraud is heading.
The remote verification model used across finance has depended on the idea that visual presence and responsive motion can stand in for physical trust. But now, deepfake-enabled fraud is eroding that premise.