As part of research supported by the Law Foundation’s Information Law and Policy project, Nessa, alongside Professor Liz Campbell from Monash University in Australia, Dr Joe Purshouse from the University of East Anglia in the United Kingdom, and Te Herenga Waka’s Dr Marcin Betkier, looked into how facial-recognition technology is currently used in Aotearoa New Zealand and the rights and interests that it might affect.

“The collaborative nature of this project enabled me to work with others to see what was happening in other jurisdictions and what regulations exist, so we can map what is happening across the world,” says Nessa.

She has been interested in the legal and ethical issues around biometrics (personal characteristics that can be used for identification) for about a decade, starting with DNA and then moving to facial recognition.

“Generally, if someone wants to collect a biometric from you—a fingerprint, for example—it is with your consent, or at least your knowledge.

“Your facial image is public and can be collected at a distance without your knowledge or consent. Nonetheless, it still represents an intrusion on privacy.”

As part of this research, a stocktake was undertaken that revealed the spectrum of impact and use of facial-recognition technology globally. This ranged from low-risk uses such as verification at a country’s border to high-risk activities like live automatic facial recognition from closed-circuit television feeds and controversial apps such as Clearview AI.


The use of facial-recognition technology is increasing in New Zealand and comparable countries across sectors such as banking, customer tracking, travel, and security. Law and regulation have not kept up with the pace of implementation.

“If this regulation gap isn’t addressed soon, the impacts on human rights—such as privacy, freedom of expression, the right to peaceful protest, and the right to be free from discrimination—are potentially extensive,” says Nessa.

The final report makes 15 recommendations intended to inform government agencies and the private sector about how best to manage the risks of facial-recognition technology.

A key recommendation is that biometric information—such as DNA, facial scans, fingerprints, and iris scans—be given special protection and that high-quality privacy-impact and algorithm-impact assessments be implemented.

“We also recommend independent oversight of the collection, retention, and comparison of facial images and greater transparency around the sharing of facial images between state agencies, other jurisdictions, and the private sector.”

The issues involved in the ethical use of facial recognition are rapidly evolving; some aspects have changed even since the project concluded in late 2020, particularly around the concept of social licence and as a result of the COVID-19 pandemic.

“When technology gets too far ahead of the law, and the law can’t keep up, do you rely on acceptance or support from the public?

“If you told people two years ago that we would be living in a world where you had to check in to places you visited, they’d be like, ‘What?’. It is an interesting point to consider: Have we changed our views and accepted tracking technology? The role of public opinion on matters like this is massive,” she says.

Nessa is one of New Zealand’s leading experts and academic researchers in the field of facial-recognition technology. Together with Dr Andrew Chen, a research fellow at the University of Auckland, she has recently been commissioned by the New Zealand Police to explore the current and possible future uses of facial-recognition technology and what it means for policing in New Zealand communities.

Discover more—read the full report on facial recognition at www.wgtn.ac.nz/frt-report
