Fraudsters are exploiting generative AI faster than organizations can respond: SAS

In the great AI race of the 2020s, fraudsters have a healthy lead, and they are upping their pace in the U.S. and Canada.
“Cybercriminals don’t have governance committees, and they don’t wait for budget cycles or regulatory clarity – they just act,” says Stu Bradley, senior vice president of risk, fraud and compliance solutions at SAS. The company has just published its 2026 Anti-Fraud Technology Benchmarking Report in collaboration with the Association of Certified Fraud Examiners (ACFE), and the findings show organizations falling behind in their responses to rapidly evolving fraud tactics enabled by generative AI.
“Every quarter business leaders spend evaluating a technology is another quarter lawbreakers get to weaponize it and find organizations underprepared,” Bradley says in a release.
AI is increasing fraud across modalities, and only 7 percent of anti-fraud professionals say their organizations are more than moderately prepared to detect or prevent it. Deepfake social engineering has risen, with 77 percent of respondents to SAS’ survey reporting a slight-to-significant increase. Also rising are consumer fraud/scams, generative AI document fraud and deepfake digital injection attacks to defeat biometric defenses. More than half of respondents expect all of these categories to increase significantly over the next 24 months.
Budgets holding back adequate defenses
Defenses, meanwhile, are slow to respond. A quarter of organizations now use machine learning in their anti-fraud programs – up from 18 percent in 2024, a relatively slight increase of seven percentage points. Another 28 percent expect to adopt it by 2028. That means just under half of organizations have no plan in place for handling generative AI fraud within the next two years. Budgetary and financial restrictions remain the leading barrier to implementation, cited as a challenge by 84 percent of respondents.
Notably, physical biometrics is now the most widely adopted of the emerging anti-fraud technologies gauged in the study, used by 45 percent of surveyed organizations.
Those organizations that do adopt AI aren’t managing it very well. Just 18 percent of respondents say their organization tests AI models for bias or fairness. And while 82 percent say explainability is important, just 6 percent feel completely confident explaining how their models make anti-fraud decisions.
The core issue is that generative AI lends itself all too well to fraud, and fraudsters don’t play by the same rules as regulated industries. “Physical biometrics, agentic and generative AI – and yes, even quantum AI – the technologies transforming the war on fraud are maturing rapidly,” says the report. “But fraudsters’ readiness to exploit them is advancing in parallel, and bad actors have a tremendous advantage.”
Regionally, Canada and the U.S. have seen the most significant increases, and the trend is expected to continue.