In science fiction, facial recognition technology is a hallmark of a dystopian society. The truth of how it was created, and how it’s used today, is just as freaky.
In a new study, researchers conducted a historical survey of more than 100 data sets, compiled over the last 43 years, that have been used to train facial recognition systems. The broadest revelation is that, as the need for more data (i.e., photos) increased, researchers stopped bothering to ask for the consent of the people whose photos they used as data.
Researchers Deborah Raji of Mozilla and Genevieve Fried of AI Now published the study on Cornell University’s free distribution service, arXiv.org. The MIT Technology Review published its analysis of the paper Friday, describing it as “the largest ever study of facial-recognition data” that “shows how much the rise of deep learning has fueled a loss of privacy.”
Within the study’s charting of the evolution of facial recognition datasets, there are revealing moments in history and facts about this technology’s development. They show that facial recognition is a flawed technology when applied to real-world scenarios, that it was created with the express purpose of expanding the surveillance state, and that its effect has been to degrade our privacy.
Here are 9 scary and surprising takeaways from 43 years of facial recognition research.