A Quick Look at Facial Recognition with Micah Howser
We’ve seen a number of concerning developments at the intersection of biometrics and privacy lately, and one that has made headlines repeatedly is facial recognition software.
Facial recognition systems analyze images of the human face in order to identify individuals, typically to unlock smartphones or control entry to airports and corporate offices, but increasingly for law enforcement purposes as well.
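Clearview AI’s system is proprietary, but as a rough sketch of the underlying mechanics, here is a minimal one-to-many match using the open-source face_recognition Python library. The library choice and the file names are illustrative assumptions, not anything Clearview is known to use:

```python
# A minimal sketch of one-to-many face matching, assuming the open-source
# face_recognition library (https://github.com/ageitgey/face_recognition).
# The image file names below are hypothetical placeholders.
import face_recognition

# Build a "gallery": a numeric encoding for a face we already know.
known_image = face_recognition.load_image_file("alice_profile.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Encode every face found in a new, unlabeled photo.
unknown_image = face_recognition.load_image_file("street_photo.jpg")
unknown_encodings = face_recognition.face_encodings(unknown_image)

for encoding in unknown_encodings:
    # compare_faces returns True when two encodings fall within a
    # distance threshold (0.6 by default), i.e. a claimed identity match.
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    print(f"match={match}, distance={distance:.3f}")
```

The point is not the particular library but how little code stands between a scraped photo and a searchable identifier.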
Last February, leading facial recognition software company Clearview AI was hit with a data breach in which an intruder “gained access” to its customer list and other data. According to Clearview AI’s attorney, what the hackers did not access was the company’s crown jewels: a database of more than 3 billion images of people scraped from social media sites and other widely used applications. (Just a month before the breach, in January 2020, Facebook paid out a record $550 million settlement when data collected for its face-tagging feature was shown to violate the Illinois Biometric Information Privacy Act (BIPA), illustrating the fact that consumers are growing wary of the ways in which big tech firms collect data without their consent.)
What should alarm all of us is that our biometric information is very likely in the Clearview AI database. Unlike the log-ins and credit card data we have willingly shared with retailers, Facebook and other companies, we never consented to share our facial photos with a tech startup most of us had never heard of until it was breached. It’s safe to say most people have no idea that their facial image data has already been collected, who is using it, or to what end. What’s more, we will not be notified if it’s breached, stolen or sold on the black market. We simply have no guarantee that this data is held securely, and in today’s data security environment we might reasonably assume it’s not. You can cancel a stolen credit card, but you can’t change your face.
Clearview AI’s stated aim is to share its data with law enforcement agencies to identify perpetrators and solve criminal cases. Indeed, according to The New York Times, more than 600 agencies have already started to use this technology. Much of the time it has helped to identify criminals, but in some instances the use of facial recognition databases has led to wrongful arrests. The technology also exhibits racial bias, misidentifying people of color at higher rates. It has already been used to track protesters and to monitor college campuses, uses that could infringe on civil liberties.
It also opens a can of worms of unforeseen privacy and security risks. Imagine someone on the street taking your photo in a public setting without your knowledge and using it to look up your name, address and other personal data. The technology could be used to spoof identities in criminal schemes, or for unsavory marketing practices and racial profiling. With IoT-connected devices such as cars, manufacturing equipment or biomedical devices, the stakes of defeating facial recognition access controls become much higher, risking physical injury and even death.
For the time being, this risk is unfortunately not well enough understood or discussed to prompt widespread action. All individuals can do is try to keep their personal privacy well guarded. You can limit what a company like this gathers on you, for instance by changing your social media settings to hide your photos from search engines, but if you’re only changing those settings now, chances are it’s too late. I already tried to find out what Clearview AI holds on me via a CCPA data subject access request, to no avail.
Just last week, Clearview AI stated it would roll out new compliance features to help promote ethical use of the technology. Still, it will be up to police departments to monitor individual officers and actually enforce compliance. Some cities have begun to ban the use of facial recognition technology, but absent a coherent federal law, we’re leaving it up to users to define ethical boundaries and respect privacy limits. Class actions like the Facebook case show that BIPA and similar laws are gaining traction in court, but that’s still a stopgap measure. In the meantime, we’re putting a lot of trust in a lot of people to get a very powerful tool right.