Privacy SOS

Bias All the Way Down: Research Shows Domino Effect When Humans Use Face Recognition Algorithms

Earlier this month, Portland, Oregon joined over a dozen other municipalities nationwide that have banned government use of face surveillance technology. And with a new study confirming that face recognition algorithms can influence human perceptions of faces, it’s become even clearer: police should not use face surveillance, and more localities must stop the spread of this technology.

Back in June, we learned about the case of Mr. Robert Williams, a Black man who was wrongfully arrested because of an erroneous face recognition match. Mr. Williams only found out that face recognition technology was used to ‘identify’ him because, during the Detroit Police interrogation after his arrest, the cops let this fact slip. “The computer must have gotten it wrong,” an officer told him.

Had it not been for this seemingly accidental disclosure, Mr. Williams would never have known the police used this technology to misidentify and detain him. Since police generally do not disclose their use of this technology to arrestees, we don’t know how many other people have been wrongfully arrested because of the use of this faulty, biased technology. In Massachusetts, for example, police have conducted at least hundreds of face recognition searches per year since 2006—but attorneys with the state public defender’s office have not been notified that the technology was used in their clients’ cases.

This widespread failure to notify defendants constitutes a due process crisis, because defendants cannot challenge the legitimacy of a technology if they don’t know it was used in their case. To excuse these due process violations, and their use of technology with well-documented bias problems, law enforcement agencies and vendors claim face recognition is only used to generate “investigative leads” that flag potential suspects, not to positively identify them. The police have gone to great lengths to assure the public that the technology is not in the driver’s seat, and that there is always a “human in the loop” responsible for making the decisions that lead to arrests.

Law enforcement’s justification for its secrecy surrounding the use of face recognition in criminal cases has always been suspect. But a new peer-reviewed paper published in the scientific journal PLOS ONE shows that police claims that face recognition technology is just an innocent “investigative lead” are flat-out wrong. Instead, the paper confirms that suggestions from face recognition algorithms bias humans’ cognitive decision-making process when identifying faces. The researchers found that an inaccurate result from a face recognition scan sets off a domino effect that biases any follow-up analysis by a human.

In other words, the technology has an outsized influence on police decision-making, whether the police know it or not.

The Research: How Do Face Recognition Algorithms Affect Human Perceptions of Faces?

In the study, 300 volunteers each looked at 12 image pairs of faces. Some of those pairs showed the same person; some of them showed two different people.

First, the volunteers were split into three groups. The first group was told a computer had already analyzed the face pairs; the second group was told the face pairs had already been analyzed by someone else; and the third group served as a control. The third group was shown images of face pairs with no labels, while the first two groups were shown face pairs either labeled “same person” or “different people.”

Then, researchers asked all volunteers to rate the similarity between the faces on a scale from negative three (-3) to three (3), where three denoted, “I am absolutely certain this is the same person” and negative three denoted, “I am absolutely certain these are different people.” Each individual was also asked whether they would trust a computer, another person, or themselves to “identify a person.”

Analysis of the responses shows that the presence of labels in both the first and second groups had a significant effect on the volunteers’ responses. When humans were asked to determine if two photos were of the same person, the presence of labels from a computer algorithm skewed the humans’ decisions. For example, when told a computer had previously labeled two faces as the same, a person was more likely to decide the two faces were the same person (even when they were of different people), and vice versa.

In fact, the labels introduced a bias so strong that they nudged people a whole number on the rating scale, such that the presence of a “same” label would cause an undecided volunteer to rate a photo pair as one (1), “I am somewhat certain this is the same person,” instead of zero (0), “I am not sure.”

It is important to note that a previous study had already found that providing labels from an algorithm alongside a pair of faces affected human decision-making. However, its authors assumed this effect occurred because the labels distracted from the task at hand, effectively reducing humans’ sensitivity, that is, their ability to tell different faces apart.

This new study rejects that assumption, showing that volunteers’ sensitivity did not change whether or not they were shown labels. Instead, the researchers show that the changes in decision outcomes came from the bias the labels induced in humans’ internal decision-making processes. To put it simply: If a face recognition system says Robert Williams did the crime, the police are more likely to believe he did, even if their own eyes show he didn’t.
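To make that sensitivity-versus-bias distinction concrete, here is a minimal signal detection sketch of our own. It is an illustration under assumed numbers, not the paper’s analysis or data: the similarity distributions and criterion values below are hypothetical. The point it demonstrates is that a label can shift a reviewer’s decision criterion, producing many more false “same person” calls, while the reviewer’s underlying ability to tell faces apart (d′) stays the same.

```python
# A minimal signal detection sketch (our own illustration, not the paper's
# analysis or data): a label shifts the decision criterion, not the ability
# to tell faces apart. All numbers are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_pairs = 100_000

# Hypothetical "perceived similarity" scores for true matches and non-matches.
matches = rng.normal(loc=1.0, scale=1.0, size=n_pairs)
non_matches = rng.normal(loc=-1.0, scale=1.0, size=n_pairs)

def rates(criterion):
    """Fraction of pairs a reviewer calls 'same person' above a given criterion."""
    hits = (matches > criterion).mean()              # correct "same" calls
    false_alarms = (non_matches > criterion).mean()  # wrong "same" calls
    return hits, false_alarms

def d_prime(hits, false_alarms):
    """Sensitivity: how well the reviewer separates matches from non-matches."""
    return norm.ppf(hits) - norm.ppf(false_alarms)

# No label: a neutral decision criterion.
h0, fa0 = rates(criterion=0.0)
# "Same person" label: the criterion drops (the reviewer is readier to say
# "same"), but the underlying similarity distributions are untouched.
h1, fa1 = rates(criterion=-1.0)

print(f"no label      hits={h0:.2f}  false alarms={fa0:.2f}  d'={d_prime(h0, fa0):.2f}")
print(f"'same' label  hits={h1:.2f}  false alarms={fa1:.2f}  d'={d_prime(h1, fa1):.2f}")
# Sensitivity (d') stays roughly the same, while false "same person" calls on
# non-matching pairs roughly triple: that shift is the bias the study describes.
```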

How Does This Research Affect The Use Of Face Recognition Technology By Law Enforcement?

This study shows that even when face recognition algorithms are used only to generate investigative leads, they bias how the human reviewing those leads recognizes faces. Incorporating a face recognition algorithm into a human process alters the human’s responses. Thus, the selection a human makes from a group of images provided by the algorithm “carries over” the bias in the system. Make no mistake: The process is biased all the way down.

The study also suggests that mistrusting the source of a label does not prevent a biased decision. When asked whom they would trust to identify someone, respondents showed significantly less confidence in other people: 18 percent of volunteers said they would not trust another person to identify an individual, compared to 9 percent who would not trust themselves and 8 percent who would not trust a computer. Yet during the face-matching task, volunteers’ responses were the same regardless of the source of the labels; their behavior was swayed by labels from sources they had claimed not to trust.

The implications of such findings are enormous. In the context of face recognition algorithms being used to produce investigative leads, this suggests that even a human who understands that such algorithms are biased would not be immune from replicating that bias. The issue appears to lie within the cognitive process of human face recognition, which cannot be consciously controlled.


Ultimately, this new research confirms what advocates against the use of face recognition have been saying for years: The “human-in-the-loop” framework is not the panacea for the inaccuracies of face recognition systems that police claim it to be.

Face recognition technology is flawed in all its aspects and applications. We already knew about its biases and inaccuracies when analyzing Black, brown, and feminine faces. We also know how its use threatens civil rights, civil liberties, and privacy rights.

Now, we know the use of face recognition technology has lasting effects even on human reviewers, even if they claim to be mistrustful of the tech. We cannot wait until more people are wrongfully arrested due to these intractable problems at the intersection of technology and human bias. The use of face recognition technology by the government must be banned. Take action now to ensure a free future for all people.

Lauren Chambers is the staff technologist at the ACLU of Massachusetts. Emiliano Falcon-Morano is policy counsel for the ACLU of Massachusetts’ Technology for Liberty Program.
