Research raises human rights concerns as facial recognition technology use expands

A Faculty of Law research project has shown the potential impact of facial recognition technology use on human rights.

A research project led by researchers at Te Herenga Waka—Victoria University of Wellington has shown the potential impact of facial recognition technology (FRT) use on human rights.

The study, led by Associate Professor Nessa Lynch with Dr Marcin Betkier, both from the Faculty of Law, also highlights the current regulation gap in Aotearoa New Zealand. Their report makes 15 recommendations aimed at informing government on how best to manage the risks of FRT use.

The research was co-authored with Professor Liz Campbell of Monash University in Australia, and Dr Joe Purshouse of the University of East Anglia in England, and supported by the New Zealand Law Foundation.

The research shows that use of FRT is increasing in Aotearoa and comparable countries, across sectors including government departments, policing, banking, travel, security, and customer tracking.

“If this regulation gap isn’t plugged soon, the impacts on human rights—such as privacy, freedom of expression, the right to peaceful protest, and the right to be free from discrimination—are potentially extensive,” says Associate Professor Lynch.

The research involved a stocktake of the use of FRT here and in comparable countries, with a focus on use by the state. The spectrum of impact ranges from low-impact uses such as verification at the border, to high-impact activities like live automatic facial recognition from CCTV feeds and controversial apps such as Clearview AI, which is used in policing in other countries and has been trialled in this country.

“We call for an immediate moratorium on live automatic facial recognition by police, due to its impact on individual and societal rights. We also call for additional oversight of police access to driving licence and passport databases while a range of recommendations is considered,” says Associate Professor Lynch.

The comprehensive range of recommendations includes giving biometric information—such as DNA, fingerprints, iris scans, and facial data—special protection and implementing high-quality privacy impact assessments and algorithm impact assessments.

“We also recommend the government establish independent oversight of the collection, retention, and comparison of facial images, and we want to see transparency around the sharing of facial images between state agencies, other jurisdictions, and the private sector,” says Associate Professor Lynch.

The full report is now available.