The ethical questions that haunt facial-recognition research

A collage of images from the MegaFace data set, which scraped online photos. Images are obscured to protect people’s privacy. Credit: Adam Harvey, based on the MegaFace data set by Ira Kemelmacher-Shlizerman et al., which draws on the Yahoo Flickr Creative Commons 100 Million data set, licensed under Creative Commons Attribution (CC BY) licences

By Richard Van Noorden
Nov 18, 2020

In September 2019, four researchers wrote to the publisher Wiley to “respectfully ask” that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity [1].

China had already been internationally condemned for its heavy surveillance and mass detentions of Uyghurs in camps in the northwestern region of Xinjiang — which the government says are re-education centres aimed at quelling a terrorist movement. According to media reports, authorities in Xinjiang have used surveillance cameras equipped with software attuned to Uyghur faces.