Just because they can – the biometric conundrum for law enforcement

By Professor Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner

‘They could have done more but didn’t.’ That’s an unpolished takeaway from two recent cases – one in New Mexico, the other in California – in which jurors found technology companies had been negligent in managing the potential risk from their products.

Protecting people, especially vulnerable people, from abuse and exploitation via everyday technology is rarely out of the headlines as governments wrestle with its regulation.

These landmark cases are essentially about where liability should fall when technology leads to avoidable harm to the user. But what happens where technology that might have prevented that same harm to vulnerable people was available but wasn’t used? For lawyers this is an interesting area; for law enforcement it presents a real dilemma.

In a recent BBC programme looking at the use case for live facial recognition (LFR) on the UK’s rail network, I asked the question: if the police or train companies could have used the technology to find a child who had been groomed online and was on their way to meet their abuser, but did not, would we forgive them? This is not a rhetorical exercise; it should be at the centre of police policy considerations about AI-enabled technology. Privacy advocates like to say “just because we can doesn’t mean we must” but, with some biometric systems, that may no longer be quite so clear.

In many countries, the impact of new technologies is being tested against the existing rules that govern not just policing and security but all manner of public functions. AI tools and their developers are not exempt from the general duties applicable to commercial providers and, although the US verdicts are being appealed, they give us an idea of how liability may be extended to online products and services. The Meta and YouTube cases are examples in the social media setting, and other so-called bellwether lawsuits will be queuing up along the Valley. In the context of law enforcement, the police in England and Wales owe no specific duty of care to the population at large, but how far autonomous systems acting as agents might disturb that settled position in the future is unknown. As with so many applications of AI-powered capability, there is often no precedent to apply. We haven’t been here before.

While private companies address their obligations to consumers, technological capability is bringing national challenges for governments. All countries that are signatories to the European Convention on Human Rights (ECHR) must consider how new AI-driven biometrics like facial recognition technology (FRT) will affect long-established public law principles – and vice versa. Relevant policy should cover the state’s use of intrusive technology against the citizen and, at the same time, identify any new obligations that the technology has created.

In a case against the Metropolitan Police (DSD and Another), the UK Supreme Court found the police liable for failing to protect citizens from degrading or inhumane treatment by an offender whom they were investigating for serious sexual offences. The court held that the police had a positive obligation to conduct an ‘effective investigation’ of crimes resulting in such treatment. The basis for liability was partly the ‘means’ that had been available to the police by which they might reasonably have prevented the harm to subsequent victims. DSD was not a technology case, but new biometric capability has two features that make it relevant: reliability and affordability. Remote biometric capability like FRT is an effective and inexpensive tool that can be used at scale. As such, it has materially changed the parameters of what an ‘effective investigation’ looks like – and with it the legitimate expectations of the citizen.

While concerns about the police use of some technologies persist, failing to take action to avoid foreseeable harm presents the police with not only a legal risk but, increasingly, a moral one. Where they have access to approved technology that could have prevented significant harm to a vulnerable person or a community, the police risk a public verdict that they too could have done more but didn’t. And the more they insist that this biometric technology is a necessary component of modern policing, the greater that risk becomes.

As AI tools become more embedded, it will be important not to conflate arguments that the technology is itself flawed or unlawful with the way in which the police decide to use, or not to use, it on a particular occasion. It has been over five years since the Divisional Court determined that the use of FRT by the police in England and Wales had not been a breach of their duty to act ‘in accordance with law’ – how long will it be before a court is asked to determine the obverse question?

Police practice must reflect the relevant law (such as the EU AI Act, which may render some uses of FRT unlawful ab initio), reinforced by official guidance. The challenge comes from the gaps in that law and the paucity of such guidance. Litigation must be one of the most unpredictable ways of shaping policy and does little for public confidence; a specific legal and regulatory framework is needed.

Beyond policing, biometric capability for ID verification, age estimation, location tracking and prediction will bring other public services like health, education and social care into the discussions about preventing foreseeable harm. This will become acutely apparent with the next global pandemic.

As we pore over the social media judgments and debate the wider moral issues of technology’s place in our lives, we might reflect on the “could have done more but didn’t” theme. Soon enough, governments will have to provide a defence to public indictment for not having used available AI-enabled technology to prevent harm. For the police, advanced biometric capability creates new options and brings new expectations. Does that mean because they can do something, they must? In some circumstances it just might.

About the author

Fraser Sampson, former UK Biometrics & Surveillance Camera Commissioner, is Professor of Governance and National Security at CENTRIC (Centre for Excellence in Terrorism, Resilience, Intelligence & Organised Crime Research) and a non-executive director at Facewatch.
