
New federal government study confirms face surveillance isn’t ready for primetime

Face surveillance is dangerous when it works, and when it doesn’t. A new report from the National Institute of Standards and Technology (NIST), a non-partisan federal research agency within the US Department of Commerce, adds further data to a growing body of evidence showing the technology is far from ready for primetime.

The results of the NIST study, published today, indicate face recognition algorithms still have particular trouble properly classifying the faces of darker-skinned women, a finding originally revealed by MIT Media Lab researcher Joy Buolamwini in her groundbreaking 2017 study Gender Shades.

The NIST study examined 189 algorithms made by 99 companies, using images collected from law enforcement agencies and the State Department—a total of 18.27 million images of 8.49 million people, including an FBI database containing 1.6 million mugshots. Crucially, NIST examined the performance of the algorithms in two different use cases: one-to-one searches and one-to-many searches. In one-to-one recognition, an algorithm compares two images to determine whether they show the same person (think: iPhone user authentication). In one-to-many searches, an algorithm compares a probe image against many other images, looking for a match (think: government surveillance). The tests looked for both false positives (when the algorithm mistakenly identifies a match where none exists) and false negatives (when the algorithm fails to identify an actual match).
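
To make the two search modes and the two error types concrete, here is a minimal, purely illustrative Python sketch. The embeddings, names, similarity threshold, and gallery below are invented for this example; real face recognition systems compute embeddings with neural networks and search far larger databases.

```python
# Purely illustrative sketch: real systems derive face "embeddings" with neural
# networks; the vectors, names, and threshold here are invented for this example.
import numpy as np

THRESHOLD = 0.80  # hypothetical similarity cutoff chosen by the system operator

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings (closer to 1.0 means more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, claimed: np.ndarray) -> bool:
    """One-to-one: does the probe match a single claimed identity? (e.g., unlocking a phone)"""
    return cosine_similarity(probe, claimed) >= THRESHOLD

def identify(probe: np.ndarray, gallery: dict) -> list:
    """One-to-many: which gallery entries score above the threshold? (e.g., a mugshot search)"""
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= THRESHOLD]

# Invented example data: three enrolled people and one probe image of person_a.
gallery = {
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.2, 0.8, 0.5]),
    "person_c": np.array([0.4, 0.4, 0.7]),
}
probe = np.array([0.85, 0.15, 0.35])

print(verify(probe, gallery["person_a"]))  # True: the correct identity is accepted
print(verify(probe, gallery["person_b"]))  # False: an impostor claim is rejected

print(identify(probe, gallery))  # ['person_a']
# In the one-to-many case, flagging anyone other than person_a would be a false
# positive (an innocent person returned as a "match"); returning nothing at all
# would be a false negative (the true match missed).
```

The point of the sketch is only that the same underlying comparison behaves very differently by use case: a false positive in one-to-one verification lets an impostor in, while a false positive in a one-to-many search can point the government at the wrong person.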

The NIST researchers found that the face recognition algorithms generally performed more poorly when examining the faces of people of color and women:

  • For one-to-one matching, the rate of false positives was higher for Asian and Black people than for whites. “The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm,” the researchers write. “False positives might present a security concern to the system owner, as they may allow access to impostors.” 
  • Algorithms developed by US companies generally performed worse on one-to-one matches of Asian, Black, and Native faces. Among those groups, performance was worst for American Indians.
  • The rate of false positives in one-to-many searches was higher for Black women than for any other demographic group. This finding is particularly disturbing given the over-inclusion of Black people in government mugshot databases and the fact that women are the fastest-growing prison population in the country; Black women are vastly overrepresented in prison and jail populations. (A rough illustration of what such differentials mean at database scale follows this list.)
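
To get a rough sense of what a factor-of-10 differential can mean at the scale of the databases described above, here is a back-of-the-envelope sketch. The false positive rates below are invented for illustration and are not taken from the NIST report; only the 1.6 million figure comes from the FBI mugshot database cited earlier.

```python
# Hypothetical false positive rates, invented for illustration only.
database_size = 1_600_000       # roughly the size of the FBI mugshot database cited above
rate_low = 1 / 100_000          # assumed false positive rate for one demographic group
rate_high = 10 * rate_low       # a 10x differential, the low end of what NIST describes

print(database_size * rate_low)   # ~16 people wrongly flagged in a single search
print(database_size * rate_high)  # ~160 people wrongly flagged in a single search
```

Under these assumed numbers, every one-to-many search sweeps in ten times as many innocent people from the group with the higher error rate, which is why the composition of government mugshot databases matters so much.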

These key findings are fresh evidence that government agencies should not adopt face surveillance technologies absent rigorous public debate, legislative authorization, and transparency and oversight—if they should adopt them at all. Today, there are no federal regulations controlling how government entities use face surveillance technology, and most states do not regulate it either. Despite this lack of regulation, federal agencies like the FBI, CBP, and TSA, as well as police departments and other state and local government agencies, have been adopting and using the technology in the dark, absent any meaningful privacy protections.

“While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied,” said Patrick Grother, a NIST technologist who authored the report. “While we do not explore what might cause these differentials,” Grother said, “this data will be valuable to policymakers, developers and end users in thinking about the limitations and appropriate use of these algorithms.”

One thing is for sure: No government agency should use technology that is inherently racially biased. That’s one reason the ACLU is calling on local and state governments to press pause on the use of face surveillance technology. In 2019, communities from Massachusetts to California passed local laws prohibiting government actors from using these biased and dangerous systems. Last week, Brookline became the second municipality in Massachusetts to ban government use of the tech, joining Somerville. And just last night, Alameda, California joined San Francisco, Berkeley, and Oakland in banning it on the west coast.

Now it’s time for the Massachusetts state legislature to act by passing a statewide moratorium on government use of face surveillance. Take action now to join the movement: say no to racist and dystopian technology, and yes to a free and just future for all.
