
Cybersecurity pro demonstrates IAD importance for facial recognition at RSAC 2026

Specialist tests deepfake vulnerability to stunning effect

Jake Moore believes the identity stack is broken. But rather than simply asserting it, the global cybersecurity advisor for ESET set out to demonstrate it.

At the RSAC 2026 Conference in San Francisco, Moore delivered an entertaining and enlightening session titled “Facing Reality: Hacking Facial Recognition” that illuminated how AI tools are affecting security. His demonstration clearly illustrated the importance of biometric Presentation Attack Detection (PAD) and Injection Attack Detection (IAD).

He made his case through live tests and experiments: opening a bank account with an AI-fabricated identity, for example, or tricking a facial recognition system into believing he is Tom Cruise. He ran these tests, with the guidance of a lawyer friend to stay within the law, against institutions the general public interacts with every day.

In Moore’s opinion, the market has adopted facial recognition technology “a little bit too early.” He wanted to make a point about how exposed people have become in an era of affordable facial recognition and off‑the‑shelf AI tools. Prior to ESET, Moore worked for 14 years in the UK police’s Digital Forensics and Cyber Crime Unit.

Experimenting with bank accounts and live CCTV feeds, with Amy's legal help

So, with the blessing of a lawyer friend named “Amy,” he ran a simple experiment. He uploaded Amy’s face into PimEyes — a public facial‑search engine — to see where her image appeared online. The results, he told the audience, were a reminder of how easily a stranger could gather material for social engineering attacks without technical know-how.

From there, Moore escalated the demonstration. He bought a pair of Meta Ray‑Ban smart glasses, the kind with a built-in camera and real‑time information features. Then he paired them with a colleague running Corsight, a commercial facial recognition system.

As he walked around his office, the colleague fed information back to him through the glasses, both visually and audibly. Nothing he did required hacking; he relied on consumer hardware and enterprise software working exactly as designed.

Moore referenced Facewatch — another system used in retail environments — to show how widespread these tools have become. But can facial recognition be hacked? Moore talked about “face fraud factories,” which are identity manipulation services. Crucially, these aren’t always confined to the dark web but can be found on the open web.

To illustrate the risk, he talked through an experiment in which he created a doctored ID using a commercially available image tool, then used an AI tool to inject a video to pass a biometric liveness check. Using the fabricated identity, he was able to open a bank account, which he immediately closed and disclosed to the institution.

Moore then explored the opposite scenario: what if someone wanted to add themselves to a watchlist? He visited Waterloo Station in London to speak with security staff about running a controlled test. Using Corsight, he asked whether they could identify him in real time as he walked through the station and alert guards. The idea was to show how easily surveillance systems could be manipulated — or overwhelmed.

In the second stage of the demonstration, he used face‑swapping software to alter his appearance in a live feed. To human operators, the CCTV looked normal. But to the facial recognition system, he appeared as someone else entirely — in his example, as Tom Cruise. The system failed to recognize him as Jake Moore.

Moore showed that the tools enabling identity manipulation, impersonation and evasion are accessible, inexpensive and increasingly sophisticated. The message was that organizations relying on facial recognition or automated identity checks need to understand both the power and the fragility of the systems they deploy.

The cybersecurity specialist also revealed that his next research project focuses on deepfake calls, previewing that “it can really fool people.”

Moore’s prediction and solutions in a rapidly evolving environment

Moore warned that current identity verification systems are not prepared for the threat.

Most systems still assume that a camera feed is real, even though deepfakes can now inject synthetic video into trusted channels. Basic active liveness challenges — blinking, head turns — no longer offer meaningful protection, and people continue to trust what they see on screen.
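As a toy illustration (not from Moore's session), the core problem is that a frame-level liveness check only inspects the frames it receives; it cannot tell whether they came from a physical camera or an injected synthetic stream. The function and data below are hypothetical:

```python
# Hypothetical sketch: a blink-based active liveness check looks only at
# frame content, not at the provenance of the video channel.

def blink_liveness_check(frames):
    """Passes if the stream contains both open-eye and closed-eye frames."""
    return any(f["eyes_open"] for f in frames) and \
           any(not f["eyes_open"] for f in frames)

# Frames from a genuine camera capture of a person blinking:
real_camera = [{"eyes_open": True}, {"eyes_open": False}, {"eyes_open": True}]

# A deepfake injected upstream of the check can simply render a blink,
# so the same check passes for the synthetic stream too:
injected_deepfake = [{"eyes_open": True}, {"eyes_open": False}]

print(blink_liveness_check(real_camera))       # passes
print(blink_liveness_check(injected_deepfake)) # also passes
```

This is why Injection Attack Detection matters: the check above can only be defeated or defended at the channel level, not by adding more blink-style challenges.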

Moore noted that deepfakes still have weaknesses: rapid movement, extreme lighting, side angles and audio drift can trip them up, and multi-camera verification can expose them. Detection tools are improving, using AI to spot artefacts, compression issues and cloned-voice patterns, but they remain imperfect and prone to false positives.

He argued that organizations must move beyond visual checks and adopt multi‑signal identity verification, combining device trust, location, behavior and hardware‑backed credentials like passkeys. Challenge‑based verification can also strengthen resilience.
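To make the multi-signal idea concrete, here is a rough sketch (not from Moore's talk, and with entirely hypothetical signal names, weights and thresholds) of combining independent signals rather than trusting a face match alone:

```python
# Hypothetical sketch of multi-signal identity verification. All names,
# weights and thresholds are illustrative, not from any real product.
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    face_match: float         # face recognition similarity score, 0-1
    device_trusted: bool      # attested device, e.g. passkey present
    location_plausible: bool  # consistent with recent account activity
    behavior_score: float     # interaction-pattern score, 0-1

def verify(signals: IdentitySignals, threshold: float = 0.8) -> bool:
    """Accept only when several independent signals agree.

    A perfect face match alone is not enough: an injected deepfake can
    max out face_match while the other anchors stay weak.
    """
    score = 0.0
    score += 0.35 * signals.face_match
    score += 0.30 * (1.0 if signals.device_trusted else 0.0)
    score += 0.15 * (1.0 if signals.location_plausible else 0.0)
    score += 0.20 * signals.behavior_score
    return score >= threshold

# A spoofed face on an unknown device fails despite a near-perfect match:
spoof = IdentitySignals(face_match=0.99, device_trusted=False,
                        location_plausible=False, behavior_score=0.2)
# A legitimate user passes on the strength of the combined signals:
legit = IdentitySignals(face_match=0.93, device_trusted=True,
                        location_plausible=True, behavior_score=0.9)
print(verify(spoof), verify(legit))  # False True
```

The design point is that the face score carries only a minority of the weight, so spoofing the camera feed alone cannot clear the threshold.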

Deepfake tools are improving weekly, are cheap and widely available, and will scale identity attacks dramatically. Future wearables may include cameras, making continuous identity verification essential. Moore described the current identity stack as broken: faces and voices are easily spoofed because most systems rely on weak liveness checks, and behavior can be modeled; device signals and cryptography remain the strongest anchors of trust.

His guidance was to assume video identity can be manipulated. As is often repeated in the biometrics community, do not rely on facial recognition alone. Use multiple signals, test systems against synthetic media and train staff to verify identity through independent channels.

His prediction: seeing is no longer believing — and identity systems must evolve fast.
