Artificial Intelligence and its Problematic Nature in Criminal Justice

Mark
5 min read · Nov 23, 2020
AI could have a massive impact on the future of the U.S. justice system.

Introduction

If there’s anything that has come to light about the criminal justice system in the United States, it’s that the system is deeply flawed and dysfunctional. With racial disparities, mass incarceration, draconian drug laws, and the hyper-criminalization of ordinary people, it becomes harder every day to defend this broken system.

Many lawmakers and judicial leaders have begun to take notice of these evident flaws. However, instead of dismantling the policies behind them and instituting more equitable and fair laws, lawmakers have opened the floodgates of artificial intelligence on these issues.

In hopes of fixing this deteriorating criminal justice system, artificial intelligence has become a full-fledged reality in the courts. Many lawmakers have turned to tools such as “risk-based assessment tools.” But there’s one problem with this. Artificial intelligence might just be exacerbating the very problem the U.S. criminal justice system is frantically trying to fix: racial bias.

Risk Assessment Tools

Criminal risk assessments are programs developed to predict a defendant’s risk of future misconduct. They generate a risk level for each defendant (usually from one to ten) based on a variety of inputs, such as past and current crimes, evaluations of the defendant, survey questions, and so on. These risk levels are extremely important, as they carry significant weight in decisions about sentencing and bail.
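
To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of how a tool like this might turn a handful of inputs into a one-to-ten risk level. The feature names, weights, and scoring logic below are invented for illustration; they are not the internals of any real assessment product.

```python
# Hypothetical sketch only: the features and weights are invented and do NOT
# reflect the undisclosed logic of any real risk assessment tool.

def risk_level(prior_convictions: int, age: int, failed_appearances: int) -> int:
    """Map a few illustrative inputs to a 1-10 risk level."""
    # A simple weighted sum stands in for whatever statistical model a real
    # vendor might use (logistic regression, decision trees, etc.).
    raw = (
        0.6 * prior_convictions
        + 0.8 * failed_appearances
        + 0.05 * max(0, 30 - age)   # younger defendants scored as riskier
    )
    # Squash the raw score into the one-to-ten scale that judges see.
    return max(1, min(10, round(1 + raw)))

print(risk_level(prior_convictions=2, age=24, failed_appearances=1))  # -> 3
```

Even in a toy like this, the score is only as meaningful as the inputs and weights behind it, which is exactly where the problems described below begin.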

The main selling point behind these artificial intelligence tools is that they are supposedly completely objective. While judges’ decision-making can be swayed by emotion and bias, these algorithms simply apply the impartial, unbiased nature of math to generate equitable and fair risk levels. Proudly dubbed by lawmakers as the “end-all, be-all” of bias in the court system, these algorithms are being adopted in courts across the United States.

However, despite these companies’ assurances that their assessment tools are objective and impartial, those claims do not seem to be borne out by the outcomes these risk assessments produce.

Racial Biases Found in Kentucky’s Algorithm

In 2011, Kentucky lawmakers passed a law requiring judges to consult an algorithm when deciding whether to hold a defendant in jail before trial. Lawmakers believed that consulting this algorithm would make the system fairer and more equitable by setting more people free, while also reducing the costs of the state’s justice system.

While the release rate for black defendants before trial remained the same (25 percent), the release rate for white defendants jumped to 35 percent after the algorithm was introduced.

However, that prediction did not pan out. Before the 2011 law, the proportion of black and white defendants released to await trial at home was roughly the same. After the algorithm was introduced, the share of white defendants released rose by about 10 percentage points, while the share of black defendants released stagnated at 25 percent, the same level as before the program. Not only that, white defendants were more likely to be labeled “low risk” for recidivating, or committing another crime, while black defendants were more likely to receive higher recidivism scores than white defendants. Although Kentucky has significantly altered the program and the input data it receives, this substantial gap in risk scores has persisted.

Racial Biases Found in Florida’s Algorithm

Another startling result came from Broward County, Florida. Similar to Kentucky, Broward County adopted an algorithm that assists judges in making decisions by providing a risk score for each defendant. And similar to the results of Kentucky’s experiment with risk assessments, Broward County’s scores showed skewed racial disparities.

According to a study by ProPublica, the program used in Broward County was nearly twice as likely to falsely label a black defendant as a future criminal compared to a white defendant. White defendants, meanwhile, were more often mislabeled as “low risk” than black defendants were. The study also found that the risk levels assigned to white defendants were heavily skewed toward the low end of the scale, while the risk levels assigned to black defendants were spread relatively evenly.

[Chart: risk levels for black defendants are spread relatively evenly from one to ten.]
[Chart: risk levels for white defendants are heavily skewed toward the lowest levels.]

Of all the defendants the algorithm predicted would commit violent crimes, only 50 percent actually did.
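
As a rough illustration of the kind of group-level error analysis ProPublica performed, the sketch below computes a false positive rate, the share of defendants who did not reoffend but were nonetheless labeled high risk, separately for two groups. The records are made up on the spot; they demonstrate the metric, not the actual Broward County numbers.

```python
# Illustrative only: the records below are invented, not ProPublica's data.
# A "false positive" here is a defendant labeled high risk who did not reoffend.

from collections import defaultdict

# (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, True), ("B", False, False), ("B", False, True), ("B", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted_high, reoffended in records:
    if not reoffended:                  # only non-reoffenders can be false positives
        counts[group]["negatives"] += 1
        if predicted_high:
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    fpr = c["fp"] / c["negatives"] if c["negatives"] else 0.0
    print(f"group {group}: false positive rate = {fpr:.2f}")
```

A large gap between the two printed rates is precisely the kind of disparity ProPublica reported between black and white defendants.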

Why is this happening?

So, some may be asking: why is this happening? The answer isn’t simple. Because companies in this industry don’t publicly reveal the questions and survey data their algorithms take into account, we can’t know whether this racial bias comes directly from those inputs or from some other aspect of the program. However, we can point to another culprit that may be indirectly driving these biases: bad training data.

Bad training data is input data that is flawed, too small, vague, or unrepresentative. A program can technically be trained on bad data and still perform its function, but there is no guarantee it will perform it well, and that appears to be what happened with the algorithms used in these courts.

These risk assessment tools are powered by algorithms trained on historical crime data. The algorithms use statistical methods to find connections and discover patterns, so when they are trained on historical crime data, they find correlations between crime and whatever other attributes appear in that data.

For example, if an algorithm discovered that higher income was correlated with lower recidivism, it would naturally give defendants from high-income backgrounds a lower recidivism score. This is problematic because historical crime data reflects decades of unequal policing and enforcement, so a tool trained on that data ends up reproducing those disparities rather than measuring a defendant’s actual risk.
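
The sketch below shows, on purely synthetic data, how a model trained on biased historical records reproduces that bias. Everything in it is an assumption for illustration: the 30 percent reoffense rate, the unequal arrest probabilities, and the use of scikit-learn’s logistic regression as a stand-in for an undisclosed vendor model. It is not a reconstruction of any deployed tool.

```python
# Synthetic demonstration of how biased historical data yields biased scores.
# Every number and feature here is invented; no real records or vendor models are used.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with the SAME underlying rate of actually reoffending (30 percent).
group = rng.integers(0, 2, n)
truly_reoffends = rng.random(n) < 0.30

# Historical data only records *arrests*, and group 1 is policed more heavily:
# its members are more likely to be arrested when they do reoffend, and they
# accumulate more prior arrests on their records.
caught_if_reoffends = rng.random(n) < np.where(group == 1, 0.9, 0.5)
rearrested = truly_reoffends & caught_if_reoffends            # the biased label
prior_arrests = rng.poisson(np.where(group == 1, 3.0, 1.0))   # the biased feature

# Train a simple model the way a risk tool is trained: on recorded history.
model = LogisticRegression()
model.fit(prior_arrests.reshape(-1, 1), rearrested)
scores = model.predict_proba(prior_arrests.reshape(-1, 1))[:, 1]

print("true reoffense rate, group 0:", round(truly_reoffends[group == 0].mean(), 2))
print("true reoffense rate, group 1:", round(truly_reoffends[group == 1].mean(), 2))
print("mean predicted risk, group 0:", round(scores[group == 0].mean(), 2))
print("mean predicted risk, group 1:", round(scores[group == 1].mean(), 2))
# Both groups reoffend at the same true rate, yet the model assigns group 1
# higher risk, because it has learned the over-policing baked into its data.
```

The point is not that any vendor built their tool this way, but that no amount of mathematical objectivity in the model can compensate for distortions already present in the data it learns from.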

Conclusion

Artificial intelligence is a powerful tool that will change many aspects of our lives, and the criminal justice system here in the United States is one of them. So, if we hope to replace racial bias in the court system with fair and equitable risk assessment tools that accurately predict a defendant’s probability of recidivating, we must ensure that no bias, implicit or explicit, distorts the decision-making of those algorithms.

This article is part one of a three-part series I’ve written on the implementation of artificial intelligence in the United States justice system. Next week we’ll talk about solutions for fixing the racial biases in these risk assessment programs. Thanks for reading!
