
Predictive policing, with roots in business analytics, uses advanced technological tools and data analysis to take proactive measures to “pre-empt” crime.

Predictive policing has been closely identified with the Los Angeles Police Department, whose Chief of Detectives Charlie Beck defines it in these terms:

With new technology, new business processes, and new algorithms, predictive policing is based on directed, information-based patrol; rapid response supported by fact-based prepositioning of assets; and proactive, intelligence-based tactics, strategy, and policy. The predictive-policing era promises measurable results, including crime reduction; more efficient police agencies; and modern, innovative policing.

It essentially applies the Total Information Awareness approach to policing:

Advanced analytics includes the systematic review and analysis of data and information using automated methods. Through the use of exploratory graphics in combination with advanced statistics, machine learning tools, and artificial intelligence, critical pieces of information can be identified and extracted from large repositories of data. By probing data in this manner, it is possible to prove or disprove hypotheses while discovering new or previously unknown information. In particular, unique or valuable relationships, trends, patterns, sequences, and affinities in the data can be identified and used proactively to categorize or anticipate additional actions or information. Simply stated, advanced analytics includes the use and exploitation of mathematical techniques and processes that can be used to confirm things that we already know or think that we know, as well as discover new or previously unknown patterns, trends, and relationships in the data.

Regarded as a refinement of “intelligence-led policing” – which came to the US from the UK, where it has led police to focus on research-based approaches rather than on responding to service calls – predictive policing appears to take another large step away from community policing and accountability. And while the companies selling the technology make all sorts of bold claims about its efficacy, there is no public evidence that the algorithms do anything to promote public safety.

A feedback loop of injustice

The predictive policing model is deceptive and problematic because it presumes that data inputs and algorithms are neutral, and therefore that the information the computer spits out will give police objective, discrimination-free leads on where to send officers or deploy other resources. This couldn't be farther from the truth.

As Ronald Bailey wrote for Reason, “The accuracy of predictive policing programs depends on the accuracy of the information they are fed.” Many crimes aren't reported at all, and when it comes to the drug war, we know for certain that police don't enforce the law equally. 

Take marijuana arrests as an example. We know that black people and Latinos are arrested, prosecuted and convicted for marijuana offenses at rates astronomically higher than their white counterparts, even if we adjust for income and geography. We also know that whites smoke marijuana at about the same rate as blacks and Latinos. 

Therefore we know that marijuana laws are not applied equally across the board: Blacks and Latinos are disproportionately targeted for associated arrests, while whites are arrested at much lower rates for smoking or selling small amounts of marijuana.

Now consider what happens when these arrest data are put into computer programs instructed to tell officers where to target police patrols — what's called predictive policing. The returned intelligence is treated as accurate simply because it was produced by a computer algorithm fed with arrest data.

But if historical arrest data shows that the majority of arrests for marijuana crimes in a city are made in a predominately black area, instead of in a predominately white area, predictive policing algorithms working off of this problematic data will recommend that officers deploy resources to the predominately black area — even if there is other information to show that people in the white area violate marijuana laws at about the same rate as their black counterparts. 

If an algorithm is only fed unjust arrest data, it will simply repeat the injustice by advising the police to send yet more officers to patrol the black area. In that way, predictive policing creates a feedback loop of injustice.
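
To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch; it does not represent any vendor's actual software, and every number and rule in it is an assumption made for the example (two neighborhoods, the starting arrest counts, and a rule that patrols are allocated in proportion to past arrests). Both neighborhoods are given identical underlying offense rates, yet because arrests can only be made where patrols are sent, the initial disparity in the arrest data never corrects itself.

```python
# Hypothetical toy model of the feedback loop, not a real predictive
# policing system. All numbers are invented for illustration.
TRUE_OFFENSE_RATE = 0.05   # identical underlying offense rate in both areas
POPULATION = 10_000        # people per neighborhood
TOTAL_PATROLS = 100        # patrol-hours available each period

# Historically skewed arrest data: far more marijuana arrests recorded in
# neighborhood A than in B, even though offending is the same in both.
arrests = {"A": 300, "B": 50}

for period in range(10):
    total = sum(arrests.values())
    patrol_share = {hood: arrests[hood] / total for hood in arrests}
    for hood in arrests:
        # "Predictive" step: send patrols where past arrests were made.
        patrols = TOTAL_PATROLS * patrol_share[hood]
        # New arrests scale with patrol presence, not with actual offending,
        # so the heavily patrolled area keeps generating more arrest data.
        offenders = TRUE_OFFENSE_RATE * POPULATION
        arrests[hood] += int(offenders * patrols / TOTAL_PATROLS)
    print(f"period {period}: patrol share A={patrol_share['A']:.2f}, "
          f"arrests A={arrests['A']}, B={arrests['B']}")
```

Run for any number of periods, the share of patrols sent to neighborhood A stays locked at roughly 86 percent, even though the assumed offense rate is identical everywhere: the algorithm only ever "learns" from arrests that its own deployment decisions produced.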

A thought experiment may help elucidate the problem.

It's something of a cultural axiom in the United States that high-powered bankers and lawyers have a taste for expensive cocaine and prostitutes. But because these illegal activities take place in boardrooms and fancy hotels instead of on street corners in poor neighborhoods, and because the people engaged in them are powerful, politically connected, and flush with money, there aren't very many arrests for those crimes in this demographic.

If police arrested lots of bankers and lawyers for cocaine use and for hiring expensive sex workers, we might see predictive policing algorithms sending cops to patrol rich suburbs or fancy hotels in downtown areas. Instead, the algorithms simply reproduce the unjust policing system we've got and, dangerously, add a veneer of 'objectivity' to that problem. The information came out of a computer, after all, so it must be accurate!

Law enforcement officials like to point to predictive policing to deflect questions about racism and unequal policing. But data isn't neutral, and neither are the algorithms tasked with sorting through and making sense of those pieces of information.
