You’ve just been released from prison, and you’re expecting parole conditions along the lines of those typically applied to someone convicted of the drug crimes you were sentenced for. Likely that means drug testing and weekly check-ins with your parole officer. Inconvenient, sure, but no big deal.
But when you meet with her for the first time, your parole officer tells you the conditions of your release will be much stricter. You’ll be forced to wear an ankle bracelet, and you’ll be on house arrest for the first six months of your parole. Why? A computer algorithm told her there’s a chance you’ll commit a violent crime in the future. You are baffled. “I didn’t do anything,” you protest. But the armed robbery conviction from your early teenage years (god, do you regret that night!) told a computer program that you have a pretty good chance of doing something ugly upon release. And the city has authorized parole and probation officials to use that computer program to determine the conditions of your release. So that’s that.
Sounds like fiction, right? It’s not. People released on parole and probation in Baltimore and Philadelphia are now at the mercy of an algorithm created by UPenn criminologist and statistician Richard Berk, which uses criminal justice data to “predict the future.” ABC News reports that Washington, D.C. is now running the program as well, and plans to use it not just to predict murders, but lesser crimes, too.
The pre-crime prediction system deployed in those cities doesn’t use hairless, alien-like people floating in tanks of water or Tom Cruise lookalike detectives to stop impending crimes. No, this “future crime prediction” technology is based on mathematics, which lends it a veneer of seriousness: it’s scientific, not sci-fi. But the underlying concept, predicting the future, raises the same issues as Minority Report.
Why wait for someone to commit a serious crime when the government could use technology to guess who is dangerous, and then act to control that person? What could possibly go wrong?
ABC News describes criminologist Richard Berk’s work:
Beginning several years ago, the researchers assembled a dataset of more than 60,000 various crimes, including homicides. Using an algorithm they developed, they found a subset of people much more likely to commit homicide when paroled or probated. Instead of finding one murderer in 100, the UPenn researchers could identify eight future murderers out of 100.
Look at that statistic again: of every 100 people the algorithm flags as future murderers, only eight will actually go on to kill. The other 92 will be singled out for harsher treatment over a crime they will never commit. But that miserable track record isn’t stopping Washington, D.C. from operating the program in an expanded capacity.
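To make the arithmetic concrete, here is a minimal sketch of what those two reported rates imply. The only inputs are the figures quoted above (roughly 1 in 100 parolees at baseline, 8 in 100 among those the algorithm flags); this is an illustration of those two numbers, not a reconstruction of Berk’s actual model.

```python
# Arithmetic implied by the two rates quoted in the ABC News report.
# This illustrates those numbers only; it is not Berk's model.

base_rate = 1 / 100      # ~1 in 100 parolees commits homicide
flagged_rate = 8 / 100   # ~8 in 100 people the algorithm flags do so

flagged_harmless = 100 * (1 - flagged_rate)   # per 100 people flagged
risk_lift = flagged_rate / base_rate          # how much riskier the flagged group is

print(f"Of every 100 people flagged: {100 * flagged_rate:.0f} will kill, "
      f"{flagged_harmless:.0f} will not.")
print(f"Being flagged marks you as {risk_lift:.0f}x riskier than the 1-in-100 baseline, "
      f"yet the flag is still wrong {flagged_harmless:.0f}% of the time.")
```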
Baltimore and Philadelphia are already using Berk's software to help determine how much supervision parolees should have. Washington, D.C. is now set to use the algorithm to help determine lesser crimes as well. If those tests go well, Berk says the program could help set bail amounts and suggest sentencing recommendations.
Predicting future crimes does sound, well, futuristic, said Berk. Even his students at the University of Pennsylvania compare his research to the Tom Cruise movie "Minority Report."
Nevertheless, he said, "We aren't anywhere near being able to do that."
The researchers admit they cannot predict who will kill, or when. But will they keep trying? Probably.
What's the matter with pre-crime programs?
ABC News reports that prisoners’ advocates bristle at the program because it imposes a technocratic monitoring regime on people based on computer-generated probability, "punishing people who, most likely, will not commit a crime in the future," said Shawn Bushway, an Albany-based criminal justice professor familiar with Berk’s work.
The implementation of pre-crime algorithms in criminal justice policy raises a number of troubling questions that strike at the heart of how we are supposed to do business in this country.
What does Mr. Berk’s research say about free will? Or about our right to be treated as innocent until proven guilty? Perhaps tweaking people’s release conditions based on statistical probability doesn’t bother you, even when 92 percent of the people flagged will never commit the predicted crime. But the obvious next step in pre-crime deployment is arresting and detaining people before they have committed crimes. Given the vast troves of digital communications the NSA already collects and stores, there’s plenty of material for pre-crime data mining algorithms to sort through to determine who among us might do something dangerous, or simply something the state doesn’t like.
Imagine how such a program could be deployed to silence internal critics or frighten potential whistleblowers. An algorithm like this could be used by government agencies that monitor the reading and communication habits of FBI agents, for example, to flag agents deemed statistically likely to leak sensitive national security information to the press. The government may someday tell us that it could prevent the next Bradley Manning from leaking classified information if only it were permitted to deploy such a system.
Defenders of data mining often point to the way insurance and credit card companies use similar technologies to spot fraud and abuse. Sorting through massive piles of data looking for patterns works quite well in those circumstances. But the comparison doesn’t hold: what works for credit card theft does not work for murder.
There are two crucial reasons why data mining is acceptable — even desirable — in the insurance and credit card context and not in the criminal justice world.
First, there’s the issue of false positives. If your credit card company discovers something odd in your records, for example that you’ve started charging expensive computers once a week, it might cancel your card and call you. The company wants to make sure that it protects you (and itself) from theft and abuse. If in fact you are buying one computer per week for a month (it’s Christmas season and you’re feeling overly generous?), no harm is done. They reissue you a new card, and you feel better knowing they have your back. But a false positive in the criminal justice (or terrorism) context can mean extreme duress imposed on an innocent person: imprisonment, harassment, monitoring, or, in extreme cases, even torture or death. The two contexts aren’t comparable, both because the stakes are vastly different and because rare events like murder simply don’t generate enough data to make the algorithms reliable.
Then there are the false negatives: the future crimes the algorithm misses entirely. The quoted figures tell us that most of the people flagged will never kill; they tell us nothing about how many future murderers go unflagged and therefore receive less scrutiny. So not only are large majorities of the people targeted for increased monitoring being unfairly punished (or later arrested, or, in the terrorism context, tortured), but parole officers and law enforcement could also be directed to focus their limited attention in the wrong places, which has the perverse effect of making us less safe.
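To see how the two failure modes interact, here is a rough back-of-the-envelope sketch. The base rate and the flagged-group rate are the figures quoted from the ABC News report; the cohort size and the fraction of parolees the algorithm flags are made-up assumptions chosen purely for illustration, and the number of murderers the system misses depends entirely on that assumed flag fraction.

```python
# Back-of-the-envelope illustration, NOT Berk's actual model.
# Quoted in the ABC News report:
#   ~1 in 100 parolees commits homicide (base rate)
#   ~8 in 100 people flagged by the algorithm do so
# Made-up assumptions, for illustration only:
#   the cohort size and the share of the cohort that gets flagged.

cohort = 10_000          # hypothetical number of parolees
base_rate = 0.01         # quoted: ~1 in 100 commit homicide
flagged_rate = 0.08      # quoted: ~8 in 100 of those flagged do so
flag_fraction = 0.05     # assumed: 5% of the cohort gets flagged

flagged = cohort * flag_fraction
murderers_total = cohort * base_rate
murderers_flagged = flagged * flagged_rate

false_positives = flagged - murderers_flagged           # flagged, but will never kill
false_negatives = murderers_total - murderers_flagged   # will kill, but never flagged

print(f"Flagged for stricter supervision: {flagged:.0f}")
print(f"  will actually go on to kill:    {murderers_flagged:.0f}")
print(f"  false positives:                {false_positives:.0f}")
print(f"Murderers the algorithm misses:   {false_negatives:.0f} of {murderers_total:.0f}")
```

With these illustrative numbers, 460 of the 500 people flagged would face harsher conditions for a murder they will never commit, while 60 of the 100 eventual murderers would escape the extra scrutiny entirely. Change the assumed flag fraction and that split shifts, but the false positives never go away so long as only 8 in 100 flagged people actually offend.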
Finally, questions of efficacy aside, we need to seriously grapple with the moral and ethical questions raised by future crime prediction technologies. A good way of thinking about whether this is the right thing to do is to imagine yourself in the shoes of someone arrested for a crime they have not yet committed. The police and prosecutors tell you that you’re likely to hurt someone you love, based on results generated by a computer that chewed up and spit out your entire life history.
How would you feel? And perhaps more importantly, how could you prove that you aren't going to do something you haven't done yet? Therein lies the rub. You can't.