
Pre-crime policing software determines your threat level based on your social media use and criminal record

New Washington Post reporting describes disturbing pre-crime technology in place at the Fresno, California police department’s “real-time crime center.” The NCIC-like cop-cave plays host to license plate tracking databases, feeds from innumerable surveillance cameras, and government and corporate databases containing information about millions of people. But there’s also something new:

[P]erhaps the most controversial and revealing technology [at the crime center] is the threat-scoring software Beware. Fresno is one of the first departments in the nation to test the program.

As officers respond to calls, Beware automatically runs the address. The searches return the names of residents and scans them against a range of publicly available data to generate a color-coded threat level for each person or address: green, yellow or red.

Exactly how Beware calculates threat scores is something that its maker, Intrado, considers a trade secret, so it is unclear how much weight is given to a misdemeanor, felony or threatening comment on Facebook. However, the program flags issues and provides a report to the user.

In promotional materials, Intrado writes that Beware could reveal that the resident of a particular address was a war veteran suffering from post-traumatic stress disorder, had criminal convictions for assault and had posted worrisome messages about his battle experiences on social media. The “big data” that has transformed marketing and other industries has now come to law enforcement.
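
The actual scoring logic is a trade secret, but the general pattern described above (look up an address, pull residents’ records and social media flags, collapse them into a color) is easy to sketch. The toy Python below is purely hypothetical: none of the field names, weights, or thresholds come from Intrado, which has never disclosed how Beware weighs any of these inputs.

    from dataclasses import dataclass

    @dataclass
    class ResidentRecord:
        name: str
        misdemeanors: int = 0
        felonies: int = 0
        flagged_posts: int = 0  # e.g. keyword hits from scraped social media

    def threat_color(record: ResidentRecord) -> str:
        # Invented weights and thresholds: exactly the arithmetic the public
        # never gets to see or contest.
        score = (1 * record.misdemeanors
                 + 3 * record.felonies
                 + 2 * record.flagged_posts)
        if score >= 6:
            return "RED"
        if score >= 3:
            return "YELLOW"
        return "GREEN"

    def score_address(residents: list[ResidentRecord]) -> str:
        # In this sketch an address inherits the worst color of anyone tied to
        # it, which is one way a previous tenant's record could follow you.
        severity = {"GREEN": 0, "YELLOW": 1, "RED": 2}
        colors = [threat_color(r) for r in residents] or ["GREEN"]
        return max(colors, key=severity.get)

Even a toy version like this makes the stakes concrete: somebody chose those weights, somebody decided what counts as a “flagged” post, and somebody decided that an address should carry the worst score of anyone associated with it. None of those choices are open to public scrutiny.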

There are lots of problems with the data-driven approach to law enforcement that some cops and corporations call “predictive” policing. Among them is a fundamental disregard for the presumption of innocence. Instead of seeing a crowd of people with the same rights and responsibilities under the law, police using programs like Beware see individuals sorted by an assumption that some of them are more likely than others to commit crimes in the future, and therefore deserve special police attention. The problem is exacerbated when, as with Beware and similar technology sold by a corporation called PredPol, the algorithms that determine which people or neighborhoods are dangerous and should be more heavily policed are kept secret from the public.

We don’t know, then, exactly what makes someone appear to a cop as a green, yellow, or red “risk level.” What does a lifetime of non-violent drug offenses add up to? What if you got in trouble for assault fifteen years ago but haven’t been arrested since? How would that rank on this color-coded scale of supposed dangerousness? How many anti-war tweets does it take before the algorithm suspects you of terrorist sympathies? At what point do Facebook posts expressing concern about climate change veer into the seditious? If you tweet about #BlackLivesMatter, will the algorithm assume you are a danger to police? A central part of the problem with proprietary predictive policing software is that the code is secret, so we don’t know the answers to these questions.

And the answers could mean the difference between life and death, or between liberty and imprisonment, particularly for the people most targeted by police action: Black and brown people and the poor.

It’s only a matter of time until police shoot someone dead and later reveal that their predictive algorithm had coded the deceased person red, making the officer who killed them quick to draw and fire at the slightest “suspicious” movement. And as I wrote years ago, the combination of these kinds of predictive policing algorithms with facial recognition technology makes for some scary, dystopian stuff. Just imagine the day when police officers wear Google Glass or contact lenses that let them automatically scan every face in a crowd, with an algorithm like Beware’s then coding each person as green, yellow, or red. To most people, I might look like a white person wearing a blue shirt and black pants. But to that officer, I’ll just be a shaded avatar, colored in with the dangerousness level I’ve been assigned by a private corporation in an algorithm that’ll never see the light of day.

Worse still, the fact that police are relying on computers to make decisions for them will lead to claims that the techno-cops are somehow immune from the racist legacy of American law enforcement. I can already hear police departments telling journalists that because their system only shows police three colors (green, yellow, and red), it can’t possibly be racist. But we know that decades of racially biased policing produce the data that feeds these systems, so it’s not hard to guess what the algorithms will output: more racial bias.
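
That dynamic is easy to demonstrate. The short simulation below is my own illustration, not drawn from any vendor’s code. It starts with two neighborhoods that have identical true offense rates but unequal historical arrest counts, then allocates patrols in proportion to recorded arrests. Because arrests can only be recorded where officers are sent, the recorded gap between the neighborhoods never corrects itself; it keeps growing.

    import random

    random.seed(0)

    # Two neighborhoods with the same underlying offense rate, but neighborhood
    # A starts with more recorded arrests because it has historically been
    # policed more heavily. All numbers are invented for illustration.
    true_offense_rate = {"A": 0.10, "B": 0.10}
    recorded_arrests = {"A": 200, "B": 50}

    PATROLS_PER_DAY = 100

    for day in range(365):
        total = sum(recorded_arrests.values())
        for hood in recorded_arrests:
            # "Predictive" allocation: send patrols where the data says crime is.
            patrols = round(PATROLS_PER_DAY * recorded_arrests[hood] / total)
            # Arrests can only be recorded where officers are actually present.
            for _ in range(patrols):
                if random.random() < true_offense_rate[hood]:
                    recorded_arrests[hood] += 1

    # The initial disparity persists and widens in absolute terms even though
    # the true rates are identical: the output is the biased input, amplified.
    print(recorded_arrests)

A police department could point at that output and say the computer “found” more crime in neighborhood A. In reality, the computer only found more police.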

The future is here. It’s time for our lawmakers to start acting like it. At the local, state, and federal levels, elected officials should be asking tough questions of police departments and intelligence agencies that are using algorithms to police us. And after they’ve gotten some answers, they should make rules to ban or severely limit these practices. Data mining works great when your credit card company spots unusual buying behavior and alerts you that someone has stolen your card number. But when human freedom and even life are on the line, studies show that data mining isn’t just dangerous to civil liberties; it also doesn’t work.
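
The credit card comparison is worth a closer look, because it shows why the same technique can be tolerable in one setting and dangerous in another. Fraud detection succeeds in part because the cost of a false alarm is small and reversible: a declined charge and a text message. A minimal sketch of that kind of check might look like the following; the thresholds and numbers are invented, not any bank’s actual model.

    from statistics import mean, stdev

    def looks_fraudulent(history: list[float], new_charge: float,
                         z_cutoff: float = 3.0) -> bool:
        # Flag a charge that sits far outside the cardholder's usual spending.
        if len(history) < 2:
            return False
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return new_charge != mu
        return abs(new_charge - mu) / sigma > z_cutoff

    # A flagged transaction triggers a text message, not a traffic stop.
    print(looks_fraudulent([12.50, 40.00, 23.75, 8.99, 31.20], 2499.00))  # True

Point the same kind of scoring at people instead of purchases, and the false positive is no longer a declined charge; it is an officer arriving at a door already primed to treat whoever answers as a threat.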

In Fresno, city councilors held a hearing to talk about the police department’s use of this controversial pre-crime technology. During the hearing, one of the councilors asked a representative from Intrado, the company that makes the Beware software, to read out his threat score. The councilman came up as green, but his home came up as yellow, perhaps because of someone who lived there before him. He wasn’t impressed. After what the Post describes as a “contentious” hearing on the new technology, the police department pledged to make changes to the program. The cops now say they are considering turning off the social media monitoring and are doing away with the color-coded threat reporting.

In 2016, we can’t keep pushing these conversations into the future. The technology is here now, but the transparency, accountability, and oversight have not come close to catching up. We need to close that gap, and fast.
