
Accelerated AI adoption outpacing trust: Thales 2026 Digital Trust Index

The annual survey finds visible security earns trust, but friction in access drives delay

Thales’ 2026 Digital Trust Index shows an industry in transition, as AI technology continues to disrupt legacy approaches to identity verification, onboarding, cybersecurity and fraud prevention. A new layer of uncertainty has been added to the stack: users now face the challenge of figuring out whether they are interacting with a human or an AI agent, and whether its intentions can be trusted.

The company’s comprehensive global study of digital trust is based on a survey of more than 15,000 consumers, business partners, and IT decision makers across 13 industries. It finds the most trouble occurring during sign-up or login, when digital trust is won or lost.

Here we see the imbalance between AI deployment and the wider infrastructural context in action. As IT leaders feel pressure from above to push AI on their employees, 93 percent of IT leaders now say they are using, deploying or planning AI initiatives. Meanwhile, only 23 percent of consumers say they trust companies to use AI responsibly with their data, and 77 percent worry about AI agents acting on their behalf online.

The issue mirrors the wider dispersion of AI into society: accelerated adoption and deployment set against a deep distrust that translates into scant demand.

“When AI simply helps people work faster, confidence is high,” says Danny DeVreeze, vice president of Identity and Access Management (IAM) at Thales. “But when AI starts acting autonomously and making decisions or interacting with systems on a user’s behalf, people begin asking harder questions about security, control and accountability.”

The standoff continues between security, privacy and user experience. People are unwilling to jump into AI blind, and many prefer a more cautious approach. Forty-five percent of respondents say they prefer stronger security checks, even if sign-ups take longer. Just 22 percent favor faster access with weaker safeguards. The ravenous data gobbling of the past twenty years has made people suspicious, and clarity is elusive; only 16 percent say they fully understand how companies collect and use their personal data.

A key finding is that people want to see that they are being protected. Visible security, says the report, earns trust. That said, friction at the access layer is driving delay and credential sharing: 66 percent admit to sharing or borrowing credentials because of slow provisioning.

People still trust banks

Banking remains the standard for digital trust when sharing personal data. Fifty-seven percent of respondents trust banks, while most other sectors continue to operate in a trust deficit.

Government services rank second at 40 percent, followed by healthcare at 35 percent. Beyond that, trust is below 25 percent. Notable for the automotive industry is a dismal 3 percent trust rate for the collection of data – a clear sign that people do not necessarily want smarter cars.

Also notable for biometrics observers is how strict, established regulations in the financial sector have built a strong foundation of trust.

Passkeys win trust, but deployment gap remains

Another winner in the report is passkeys. Sixty-eight percent say passkeys increase trust, reflecting growing acceptance of the technology as an alternative to passwords and traditional multi-factor authentication. Yet there is still room to grow: “Eighty-seven percent say offering passkeys is important, yet only 49 percent currently do so. This gap represents both risk and opportunity as consumers expect stronger, seamless security.”

Overall, the picture is of a trust landscape that has become much more granular, as new threats target the complete pipeline. “Trust today is no longer determined by reputation alone,” says the report. “It is tested at login, challenged during onboarding, and reinforced every time a user is asked to share their data.”

“The future of digital trust depends on aligning operational reality with user expectation,” it says in conclusion. “The organizations that design access as a trust-building mechanism will be best positioned to compete. This means improving reliability, balancing friction against risk, explaining data use clearly, modernizing authentication with user-friendly framing, and embedding meaningful control into the account experience.”

Malicious bots headline LexisNexis Cybercrime Report

LexisNexis Risk Solutions’ latest Cybercrime Report looks at fraud from a different angle, but identifies the same culprits. According to a release, the report shows an 8 percent rise in global fraud rates driven by “attacks targeting the gaming and gambling and ecommerce sectors, cost of living pressures and new emerging fraud tactics.”

Synthetic identities, agentic attacks, and more nuanced scams are all ongoing problems. “More than one in ten frauds now involve a synthetic identity, representing an eight-fold global increase year on year and making it the fastest growing fraud type globally.”

In 2025, malicious bot attacks saw a significant 59 percent rise. The report says “bots can now mimic genuine human actions, such as how we move a cursor around a login screen, with a high degree of plausibility to fool the latest behavioural fraud detection tools.”

That said, first-party customer fraud remains the leading source of fraud globally for the second year running, comprising almost two in five reported frauds. However, results differ by region, and new attack surfaces are enabling new tactics.

“Cybercriminals are experimenting with the same technologies that are transforming digital commerce and organizations must prepare for a future where both legitimate users and malicious actors rely on automated agents to interact online,” says Stephen Topliss, vice president of fraud and identity at LexisNexis Risk Solutions.

“Those that succeed must be able to confidently distinguish between humans, bots and agents as well as determining intent. Organisations that share risk intelligence are best positioned to protect consumers and build trust in the digital economy.”
