Senators join activists, shareholders, and tech workers in demanding answers about face surveillance

In recent months the ACLU, Amazon shareholders, Microsoft, activists, scholars, and legislators have raised concerns about the use of facial recognition technologies in the United States. Today, despite a total lack of regulation, law enforcement uses face surveillance to identify and track people, private companies use the technology to authenticate customers, and employers use face recognition-driven “sentiment analysis” during the hiring process. People can be arrested, denied boarding on their flight, or passed over for a job they are qualified for on the basis of algorithms that have been shown to perform far worse when evaluating dark-skinned women than when evaluating white men. Despite this crisis, we have seen very little action from Congress.

But on Tuesday, Senator Kamala Harris (D-CA) joined the ranks of those raising concerns about facial recognition when she fired off letters to the Federal Bureau of Investigation (FBI), Federal Trade Commission (FTC), and Equal Employment Opportunity Commission (EEOC) calling for agency attention to the threats posed by the technology, particularly to women and people of color.

Facial recognition systems apply algorithms to images of human faces to identify individuals, draw connections between them, and make assumptions about them. These systems are at work every time a social media platform asks if you want to be tagged in a photo or suggests who should be tagged, and whenever your face is used as a password to unlock a device. But while consumers may be happy to use their face to unlock their phone or identify their friends on Facebook, the government’s use of face surveillance technologies to identify, track, and monitor people raises civil rights concerns that go to the heart of what it means to live in a free and open society. In the commercial context, the use of face surveillance by big box retail chains remains almost entirely unregulated, despite the significant risks to privacy and liberty it poses. And when private employers use untested, potentially inaccurate face surveillance algorithms in the hiring process, they can exacerbate existing inequalities.
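Under the hood, most of these systems reduce each face image to a numeric “embedding” and then compare embeddings to judge whether two images show the same person. The short Python sketch below illustrates only that core matching step; the `embed_face` function and the 0.6 threshold are hypothetical stand-ins, not any vendor’s actual model.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model.

    A real system runs a neural network here; this sketch just
    mean-centers and normalizes the pixels so it stays runnable.
    """
    vec = image.astype(np.float64).ravel()
    vec -= vec.mean()
    return vec / np.linalg.norm(vec)

def same_person(image_a: np.ndarray, image_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Declare a match when cosine similarity clears the threshold.

    Where the threshold is set controls the trade-off between false
    matches and missed matches, the very trade-off at issue in the
    bias studies discussed below.
    """
    similarity = float(np.dot(embed_face(image_a), embed_face(image_b)))
    return similarity >= threshold

# Example: compare a probe photo against a small gallery
# (a mugshot database works the same way, at far larger scale).
rng = np.random.default_rng(0)
probe = rng.random((64, 64))
gallery = [probe + rng.normal(0, 0.05, (64, 64)),  # near-duplicate of probe
           rng.random((64, 64))]                   # unrelated image
matches = [i for i, photo in enumerate(gallery) if same_person(probe, photo)]
print("candidate matches:", matches)  # expected: [0]
```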

All of these deployments of face surveillance raise racial and gender justice concerns in light of recent studies highlighting face recognition technology’s failure to accurately identify women and people of color. Senator Harris references the ACLU’s findings that Amazon’s Rekognition face algorithms misidentified 28 members of Congress as arrestees in a mugshot database; those misidentified were disproportionately people of color. Meanwhile, research by MIT’s Joy Buolamwini shows that even leading facial recognition algorithms are 30 times more likely to fail at identifying darker-skinned women than lighter-skinned men.
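Disparities like that “30 times” figure are measured by running a system on labeled test photos and computing error rates separately for each demographic group. Here is a minimal sketch of that kind of bias audit; the counts are invented for illustration and are not Buolamwini’s data:

```python
from collections import defaultdict

# Each record: (demographic group, whether the system classified correctly).
# All counts below are made up solely to illustrate the arithmetic.
results = (
    [("lighter-skinned men", True)] * 995 +
    [("lighter-skinned men", False)] * 5 +
    [("darker-skinned women", True)] * 850 +
    [("darker-skinned women", False)] * 150
)

errors, totals = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

rates = {group: errors[group] / totals[group] for group in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: {rate:.1%} error rate")

ratio = rates["darker-skinned women"] / rates["lighter-skinned men"]
print(f"error-rate ratio: {ratio:.0f}x")  # 15% / 0.5% = 30x
```

An audit of the kind Senator Harris asks the FBI to conduct is, at bottom, this same computation performed on the agency’s real systems and real data.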

This research has made front page news, but the FBI’s use of facial recognition technologies has quietly continued unabated. Enter Senator Harris and her colleagues from the Congressional Black Caucus.

The lawmakers’ letter to the FBI demands answers from the powerful federal law enforcement and intelligence agency about what it has done to address problems identified in a 2016 Government Accountability Office (GAO) report on the Bureau’s use of face surveillance. In that report, the GAO recommended that the FBI take steps to ensure that the algorithms the Bureau uses are accurate. It’s not clear what, if anything, the FBI has done to address these concerns.

Now Senator Harris, Senator Cory Booker (D-NJ), and Representative Cedric Richmond (D-LA) are demanding to know what, if anything, the Bureau has done since the publication of the report two years ago.

The stakes for privacy and civil rights are extraordinarily high. The FBI’s Next Generation Identification-Interstate Photo System (NGI-IPS) contains at least 30 million photos, and the Bureau has agreements with state Registries of Motor Vehicles giving it access to as many as 64 million driver’s license photos. In her letter to the FBI, Senator Harris calls for the Bureau’s facial recognition systems to be audited for bias, and seeks an explanation of what the FBI is doing to ensure that its software does not consistently underperform on the basis of race, skin tone, gender, or age.

In her letter to the FTC, Sen. Harris—joined by Senators Richard Blumenthal (D-CT), Cory Booker (D-NJ), and Ron Wyden (D-OR)—highlights that facial recognition technologies can be used in unfair and deceptive ways. When facial recognition technologies are used to identify shoplifters, for example, errors will likely disproportionately impact African American women, potentially resulting in unjustifiable arrests. In most states, stores are not legally required to disclose their use of facial recognition systems, and those systems are not required to be tested for racial or gender-based bias.

The Senators’ letter asks what the FTC is doing about commercial deployments of face surveillance, and requests that the agency assess how facial recognition technologies may be biased. The Senators also ask the FTC to commit to producing best practices for the use of facial analysis after its upcoming hearing on Algorithms, Artificial Intelligence, and Predictive Analytics.

In their letter to the Equal Employment Opportunity Commission, Senators Harris, Patty Murray (D-WA), and Elizabeth Warren (D-MA) highlight the dangers involved when employers use facial recognition technologies to assess fitness for employment. The letter paints a disturbing picture of how these technologies can be used to perpetuate inequality:

Suppose, for example, that an African American woman seeks a job at a company that uses facial analysis to assess how well a candidate’s mannerisms are similar to those of its top managers. First, the technology may interpret her mannerisms less accurately than a white male candidate. Second, if the company’s top managers are homogeneous, e.g., white and male, the very characteristics being sought may have nothing to do with job performance but are instead artifacts of belonging to this group. She may be as qualified for the job as a white male candidate, but facial analysis may not rate her as highly because her cues naturally differ. Third, if a particular history of biased promotions led to homogeneity in top managers, then the facial recognition analysis technology could encode and then hide this bias behind a scientific veneer of objectivity.

The scenario they lay out to the EEOC is bone-chilling and dystopian. Unfortunately, it’s based on current practices, and may be unfolding behind the scenes every day.
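To make the mechanism concrete: if a hiring model scores candidates by similarity to a company’s existing top managers, and those managers share demographic traits, the score rewards the shared traits rather than job skill. The toy model below is purely illustrative; its feature names and numbers are invented and reflect no actual vendor’s product:

```python
import numpy as np

# Toy candidate features: [job_skill, trait_a, trait_b].
# trait_a and trait_b stand in for demographic artifacts (mannerisms
# correlated with group membership), not job performance.
top_managers = np.array([
    [0.7, 1.0, 1.0],
    [0.6, 1.0, 0.9],
    [0.8, 0.9, 1.0],
])
manager_profile = top_managers.mean(axis=0)

def hire_score(candidate: np.ndarray) -> float:
    """Score a candidate by cosine similarity to the average manager."""
    return float(np.dot(candidate, manager_profile)
                 / (np.linalg.norm(candidate) * np.linalg.norm(manager_profile)))

insider = np.array([0.7, 1.0, 1.0])   # shares the managers' group traits
outsider = np.array([0.7, 0.1, 0.1])  # identical job_skill, different traits

print(f"insider score:  {hire_score(insider):.2f}")   # ~1.00
print(f"outsider score: {hire_score(outsider):.2f}")  # ~0.62
# Both candidates have the same job_skill, yet the outsider scores far
# lower: the model has encoded group membership, not qualification.
```

Note that nothing in this code mentions race or gender; the bias enters entirely through whose faces and mannerisms defined the “ideal” profile.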

Yesterday the ACLU filed a complaint against Facebook, calling on the EEOC to determine whether the company’s targeted ad platform violated Title VII of the Civil Rights Act of 1964 when it sold job ads to employers and agreed to show those ads only to men. In their letter to the EEOC this week, these female Senators ask whether the agency has received any complaints related to the misuse of facial recognition systems, and request that the EEOC develop guidelines for employers outlining how face recognition technologies may and may not be used during the hiring process.

Congress needs to step up to the plate to deal with issues related to algorithmic fairness, discrimination, and civil rights. Senator Harris’ emerging leadership on these matters is an important step in the right direction, especially because she represents California, home to the Silicon Valley companies that develop and sell many of these technologies. Harris wants answers from the FTC and EEOC by September 28, 2018, and from the FBI by October 1, 2018. We’ll be watching.

This blog post was co-authored by Siri Nelson and Kade Crockford.
