By: Ankitha Kasavaraju
Edited by: Andrew Bongiovanni
Graphic by: Arsh Naseer
Background
Artificial intelligence (AI) has grown in both capability and relevance, with 68 bills introduced across the United States since 2022 to regulate its ever-changing applications and address growing concerns.1 In 2022, attitudes toward AI were widely dispersed: 45% of respondents were equally concerned and excited about AI, while 37% were more concerned than excited.2 Concerns regarding AI have not decreased; however, public sentiment has shifted, with the percentage of adults who believe AI to be more beneficial rising from 16% between 2023 and 2024.3,4 Over the past three years, AI has proven its ability to make processes more efficient and to take over manual tasks in less time.5 This capacity to improve efficiency and generate accurate predictions may apply to the judicial system. Currently, criminal justice activities cost around $74 billion annually, with spending on corrections and police protection continually increasing.6
Incorporating AI into the judicial system may help make it more efficient and effective, assisting both with bringing offenders into the system and with releasing those who pose no threat. Understanding the repercussions and benefits of incorporating AI technologies into democratic institutions requires a thorough understanding of how the technology functions and how it is trained. While AI and machine learning (ML) are often used interchangeably in common discourse, the terms are distinct. Artificial intelligence is the use of machines to perform tasks that would otherwise require human operation, whereas machine learning is one approach to creating an AI application: it uses data sets to find patterns and formulate statistical algorithms that can then be applied to newer and larger data sets.7
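To make the distinction concrete, the sketch below (our own illustration in Python with the scikit-learn library and synthetic data; none of it comes from the studies cited here) walks through the workflow that definition describes: a model is fit to an existing data set to find patterns, then applied to records it has never seen.

```python
# Minimal sketch of the ML workflow described above: learn patterns from
# one data set, then apply them to new data. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # 1,000 records, 5 features each
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

# Fit on one portion of the data; the held-out portion stands in for
# "newer" data the model has never seen.
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"Accuracy on unseen records: {model.score(X_new, y_new):.2f}")
```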
Methodology
This analysis seeks to answer the policy question: would implementing machine learning improve fairness in the judicial process, and would it increase the efficiency of the judicial system? The methodology for each of these criteria is outlined below.
Fairness
The fairness of the judicial system will be evaluated by examining predictions of recidivism (the likelihood of being convicted of another crime after receiving legal sanctions), since fair treatment depends on accurate predictions of one's likelihood to re-offend. Studies from 2016 and 2018 on the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system will be used to analyze the accuracy of its recidivism algorithm. Studies from 2020, 2022, and 2023 will then be used to analyze recidivism predictions in newer algorithms and their potential biases. The evaluative criterion for determining AI's fairness will be a predictive power of .70, based on recidivism research.8
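"Predictive power" is taken here to mean the area under the ROC curve (AUC), the measure commonly reported in recidivism research; the sketch below, using entirely hypothetical scores and outcomes, shows how a model would be checked against the .70 criterion.

```python
# Hedged sketch: checking a recidivism model against the .70 threshold,
# assuming "predictive power" means AUC (area under the ROC curve).
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical risk scores and observed outcomes (1 = re-offended).
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.4, 0.35, 0.2, 0.1])
reoffended = np.array([1, 1, 0, 1, 0, 0, 1, 0])

auc = roc_auc_score(reoffended, scores)
print(f"AUC = {auc:.2f}; meets .70 criterion: {auc >= 0.70}")
```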
Efficiency
The efficiency of algorithmic platforms in the criminal justice system will be measured by the accuracy of selected predictive crime algorithms. The first source, an investigation by The Markup, analyzes PredPol, software used by police departments to forecast areas where crime is likely to occur, and gauges the accuracy of the model's predictions. Next, the effectiveness of ML in crime prediction will be analyzed through selected cases in Canada, Philadelphia, and Chicago. The evaluative criterion for the efficiency of AI will be a prediction rate of 70%.9
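As a minimal illustration of how such a prediction rate could be computed (our own simplified construction, not PredPol's method or The Markup's evaluation procedure), the sketch below scores area-based forecasts as the share of flagged location-days on which a crime was actually recorded.

```python
# Hedged sketch: a "prediction rate" for area-based crime forecasts,
# computed as the share of flagged location-days where a crime occurred.
# Both arrays are hypothetical; this is not PredPol's actual method.
import numpy as np

flagged = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1], dtype=bool)  # model flags
crime = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1], dtype=bool)    # observed

rate = (flagged & crime).sum() / flagged.sum()
print(f"Prediction rate: {rate:.0%}; meets 70% criterion: {rate >= 0.70}")
```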
Results
Fairness
The use of statistical programs to predict outcomes for offenders is a well-established practice. The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system is one such algorithm, created in 1998, that draws on 137 characteristics about an individual and their criminal history.10 A 2018 study demonstrated weak accuracy in predicting re-offenses with this algorithm: an accuracy rate of 65%, compared to 66.7% for human participants.11 A 2016 study found the COMPAS system to exhibit racial bias, with Black individuals twice as likely to be marked as having a high recidivism risk.12 A 2020 study found that algorithms were consistently more accurate than human participants, with logistic regression and the COMPAS system achieving accuracy rates as much as 30 percentage points higher. However, the same study found that when human participants received feedback, the gap between algorithmic and human accuracy shrank to around 10 percentage points or less. With a Level of Service Inventory-Revised (LSI-R) assessment, the accuracy of recidivism predictions was approximately 90%.13
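The kind of racial disparity the 2016 study reported is typically quantified by comparing false positive rates: the share of people who did not re-offend but were nonetheless labeled high risk in each group. The sketch below runs that check on hypothetical data; it is illustrative only and uses no actual COMPAS output.

```python
# Hedged sketch of the bias check behind findings like the 2016 study's:
# compare false positive rates (non-re-offenders labeled high risk) by group.
# All data here are hypothetical, not COMPAS output.
import numpy as np

high_risk = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0], dtype=bool)
reoffended = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0], dtype=bool)
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = (group == g) & ~reoffended          # non-re-offenders in group g
    fpr = (high_risk & mask).sum() / mask.sum()
    print(f"Group {g} false positive rate: {fpr:.0%}")
```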
A 2022 study further assessed AI algorithms' ability to predict arrests among parolees, finding a predictive power of .79 for arrest for any reason and .72 for violent arrests.14 A recent study by researchers at Tulane University, Pennsylvania State University, and Kennesaw State University examined the effect on recidivism rates of judges using AI recommendations. The predicted recidivism rate was 14.11% when both judges and AI tools recommended an alternative form of punishment, compared to 17.32% when judges departed from the AI recommendation for alternative punishment, suggesting added accuracy from AI-assisted prediction.15 On average, judges were found to be more lenient toward female offenders, a bias the AI tool substantially mitigated. While race did not impact the likelihood of receiving an alternative punishment absent an AI recommendation, Black offenders received 5.6% fewer alternative punishments than White offenders.16 Another 2023 study built recidivism prediction models with a decision tree algorithm, achieving 98.8% predictive accuracy and finding education to be the most significant factor.17
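A minimal sketch of the decision-tree approach, using scikit-learn and synthetic data rather than the 2023 study's own features or data, shows how such a model also surfaces which factor matters most: the feature importances it reports are how a variable like education can emerge as the most significant.

```python
# Hedged sketch of a decision-tree recidivism model with feature
# importances; the data and feature names below are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
features = ["education", "age", "prior_offenses"]
X = rng.normal(size=(500, 3))
# Make "education" (column 0) the dominant signal, for illustration only.
y = (1.5 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
for name, imp in zip(features, tree.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```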
Efficiency
Creating an efficient judicial system requires process speed, process accuracy, and the ability to reduce crime. Predictive policing algorithms have been one method used to identify areas with a high likelihood of crime. However, the crime prediction software PredPol was found to have stark biases against Black and Latino neighborhoods, targeting majority Black and Latino neighborhoods 400% more than White neighborhoods in some areas. When PredPol's predictive ability was assessed using data from the Plainfield Police Department, it was found to be less than 50% accurate.18 Burglary was the least accurately predicted crime, with a success rate of 0.1%. Other studies have also shown poor algorithmic accuracy in predicting crime; a 2018 Canadian study on machine learning-based crime prediction found the model to be only 39% to 44% accurate.19 Predictions of white-collar crimes did not fare much better: a study tracking tax crime with machine learning reported an accuracy of only 66.2%.20
A study of crime in Philadelphia found slightly higher accuracy (approximately 69%) with its algorithm.21 A 2017 study on violent crimes used a decision tree algorithm for prediction, and the system predicted violent crimes with 93% accuracy (165 out of 175 times).22 In a study of crime prediction in Chicago, researchers conducted regression analyses to identify the most accurate model before training the machine learning model, finding an accuracy rate of .84 for the decision tree model.23 More recently, a study by the University of Chicago employed an algorithm that predicted crimes a week in advance with 90% accuracy. The algorithm was also able to reveal inequalities between wealthier and lower-income neighborhoods in efforts to resolve crimes.24
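The Chicago study's step of comparing candidate models before settling on one can be sketched as a cross-validated comparison; the version below is our own minimal construction on synthetic data, not the study's actual pipeline.

```python
# Hedged sketch of comparing candidate models before training a final one,
# loosely mirroring the Chicago study's approach; data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(600, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # nonlinear pattern, for illustration

candidates = {
    "logistic regression": LogisticRegression(),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, model in candidates.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy {acc:.2f}")
```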
Conclusion
The use of AI and machine learning algorithms to meet the needs of the judicial system has been met with both skepticism and hope. In analyzing the use of AI and ML in the fairness of sentencing, specifically their use in predicting recidivism, the results demonstrate that many new predictive models met the evaluative threshold of 70% accuracy.25,26,27 Importantly, some studies found that denying parole increased the likelihood of recommitting a crime once the offender was released, making it difficult to accurately quantify how well algorithms predicted recidivism.28 Crime prediction algorithms yielded similar results regarding accuracy, with some meeting, and others failing to meet, the evaluative criterion put forth in this analysis. A strong indicator of a crime prediction algorithm's success was the statistical model used, with decision trees yielding higher rates of accuracy.29,30 Despite varied outcomes in predictive accuracy, the studies revealed biases at play in the system: the PredPol algorithm targeted Black and Latino neighborhoods, while the University of Chicago's algorithm identified demonstrable biases in police responses to crime. Although some algorithms have undeniably improved their predictive accuracy concerning crime, weighing their potential repercussions for marginalized communities is critical to the fairness of the justice system.
Recommendations
While the results of this analysis demonstrate AI and machine learning's ability to produce accurate predictions of recidivism and crime, these tools remain subject to bias. An algorithm's effectiveness varies with how it is deployed and with the quality of the data from which it learns. Governments should first focus on compiling training data sets that reduce bias in algorithms. This may require changing policing practices to build trust within communities, as crimes are reported at different rates in communities of differing socioeconomic status. Not only can fair policing practices produce larger data sets for cultivating more effective algorithms, but, if implemented correctly, they may also reduce the bias present in older data sets. Existing biases, such as those found in the COMPAS algorithm, cannot be ignored and must be addressed in this and all algorithms before large-scale integration into the criminal justice system.
Current risk assessment algorithms, however, may be used to assess fairness within the judicial system itself. For example, governments can use algorithms to help hold judges accountable for sentencing practices and to identify judges who sentence unfairly. Crime prediction algorithms can likewise be used to evaluate police action and to target areas with specific safety needs, rather than directly dictating where police are deployed. As researchers from the University of Chicago found through their algorithm, the amount of attention given to crimes varied by socioeconomic status.31 By using algorithms to identify biased practices and target policies toward community needs, crime may be reduced, creating a more efficient system. While structural changes are necessary to mitigate biases found in data and in the AI platforms built on them, governments should also look to improve the algorithms themselves.
On the technical side of implementing machine learning and AI in the judicial arena, research and data science divisions should be established to tailor algorithms to specific locations and communities, achieving the highest possible accuracy. To capitalize on advances in AI and machine learning, governments should focus on reducing bias in the data sets that train algorithms by establishing metrics to measure bias and policies that regulate it. Algorithms that predict recidivism or inform policing should also be transparent and publicly available, so as to keep the organizations deploying them accountable. Effective reduction of bias through structural interventions and strategic implementation presents an opportunity for increased fairness and efficiency in the judicial system, and thus a stronger and more equitable democracy.
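As one concrete example of what such a bias metric could look like (a deliberately simple illustration of our own, not a prescribed standard), the sketch below computes a statistical parity difference: the gap between two groups in the rate of receiving a favorable outcome, such as an alternative sentence.

```python
# Hedged sketch of one possible bias metric: statistical parity difference,
# the gap in favorable-outcome rates between two groups. Data are hypothetical.
import numpy as np

def statistical_parity_difference(favorable, group, a, b):
    """Rate of favorable outcomes in group a minus the rate in group b."""
    rate_a = favorable[group == a].mean()
    rate_b = favorable[group == b].mean()
    return rate_a - rate_b

favorable = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])  # e.g. alternative sentence
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])
gap = statistical_parity_difference(favorable, group, "A", "B")
print(f"Parity gap: {gap:+.0%}")
```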
Works Cited
[1] LexisNexis. n.d.
[2] Rainie, Lee. 2022. “How Americans Think about Artificial Intelligence.” Pew Research Center, March 17. https://www.pewresearch.org/internet/2022/03/17/how-americans-think-about-artificial-intelligence/.
[3] Stevens Institute of Technology. n.d. “Stevens Technology.” https://assets.stevens.edu/mviowpldu823/4jgZ0Ec1lDHPLUYsFprepe/60112ee74b0d3aed2882baf94c9fcb10/PhD_DS-AoL-Plan_2022S.docx.
[4] Stevens Institute of Technology. 2024. “Stevens’ TechPulse Report Reveals the Changing Landscape of Americans’ Outlook on AI.” April 9. https://www.stevens.edu/news/stevens-techpulse-report-reveals-the-changing-landscape-of-americans-outlook.
[5] Nakandakari, F. 2023. “How to Leverage AI for Internal Task Automation.” Jestor, July 3. https://jestor.com/blog/a-i/how-to-leverage-ai-for-internal-task-automation/.
[6] U.S. Department of Justice. 1993. “Performance Measures for the Criminal Justice System.” https://bjs.ojp.gov/content/pub/pdf/pmcjs.pdf.
[7] Abail, Issam Eddine. n.d. “Technology Primer: Artificial Intelligence & Machine Learning.” Belfer Center for Science and International Affairs. https://www.belfercenter.org/publication/technology-primer-artificial-intelligence-machine-learning.
[8] Equivant. n.d. “Practitioner’s Guide to COMPAS Core.” https://www.equivant.com/wp-content/uploads/Practitioners-Guide-to-COMPAS-Core-040419.pdf.
[9] Equivant. n.d. “Practitioner’s Guide to COMPAS Core.”
[10] Equivant. n.d. “Practitioner’s Guide to COMPAS Core.”
[11] Dressel, Julia, and Hany Farid. 2018. “The Accuracy, Fairness, and Limits of Predicting Recidivism.” Science Advances. https://www.science.org/doi/10.1126/sciadv.aao5580.
[12] Newcomb, Kelly. 2024. “The Place of Artificial Intelligence in Sentencing Decisions.” Inquiry Journal, March 28. https://www.unh.edu/inquiryjournal/blog/2024/03/place-artificial-intelligence-sentencing-decisions.
[13] Angwin, Julia, Jeff Larson, Lauren Kirchner, and Surya Mattu. 2016. “Machine Bias.” ProPublica, May 23. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[14] Lin, Z. “Jerry,” Jung, J., Goel, S., and Skeem, J. 2020. “The Limits of Human Predictions of Recidivism.” Science Advances. https://www.science.org/doi/10.1126/sciadv.aaz0652.
[15] Laqueur, Hannah S., and Ryan W. Copus. 2023. “An Algorithmic Assessment of Parole Decisions.” Journal of Quantitative Criminology, February 1. https://link.springer.com/article/10.1007/s10940-022-09563-8.
[16] Ho, Y.-J. (Ian), Walid Jabr, and Yining Zhang. 2023. “AI Enforcement: Examining the Impact of AI on Judicial Fairness and Public Safety.” SSRN, August 9. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4533047.
[17] Kovalchuk, O., Karpinski, M., Banakh, S., Kasianchuk, M., Shevchuk, R., and Zagorodna, N. 2023. “Prediction Machine Learning Models on Propensity Convicts to Criminal Recidivism.” MDPI, March 3. https://www.mdpi.com/2078-2489/14/3/161.
[18] Sankin, Aaron, and Surya Mattu. 2023. “Predictive Policing Software Terrible at Predicting Crimes.” The Markup, October 2. https://themarkup.org/prediction-bias/2023/10/02/predictive-policing-software-terrible-at-predicting-crimes.
[19] Kim, S., Joshi, P., Kalsi, P. S., and Taheri, P. 2018. “Crime Analysis Through Machine Learning.” IEEE Conference Publication. https://ieeexplore.ieee.org/document/8614828/.
[20] Jenga, K., Catal, C., and Kar, G. 2023. “Machine Learning in Crime Prediction.” Journal of Ambient Intelligence and Humanized Computing, February 2. https://link.springer.com/article/10.1007/s12652-023-04530-y.
[21] Tabedzki, C., Thirumalaiswamy, A., and van Vliet, P. 2018. “Predicting Crime on the Streets of Philadelphia.” University of Pennsylvania. https://www.seas.upenn.edu/~tabedzki/machine-learning-report-final.pdf.
[22] Ahishakiye, E. 2017. “Crime Prediction Using Decision Tree (J48) Classification.” International Journal of Computer and Information Technology. https://www.ijcit.com/archives/volume6/issue3/Paper060308.pdf.
[23] Naik, Snehal. 2019. “Predicting Arrests: Looking into Chicago’s Crime through Machine Learning.” Medium. Last modified September 19. https://medium.com/analytics-vidhya/predicting-arrests-looking-into-chicagos-crime-through-machine-learning-78697cc930b9.
[24] Wood, Matt. 2022. “Algorithm Predicts Crime a Week in Advance, but Reveals Bias in Police Response.” Biological Sciences Division | The University of Chicago, June 30. https://biologicalsciences.uchicago.edu/news/algorithm-predicts-crime-police-bias.
[25] Laqueur and Copus. 2023. “Algorithmic Assessment of Parole Decisions.”
[26] Ho, Jabr, and Zhang. 2023. “AI Enforcement.”
[27] Kovalchuk, Karpinski, Banakh, Kasianchuk, Shevchuk, and Zagorodna. 2023. “Prediction Machine Learning Models.”
[28] Sankin and Mattu. 2023. “Predictive Policing Software Terrible at Predicting Crimes.”
[29] Naik. 2019. "Predicting Arrests."
[30] Khosla, Sauren. n.d. “Recidivism Risk: Algorithmic Prediction and Racial Bias.” Stanford. https://web.stanford.edu/class/archive/cs/cs109/cs109.1204/psets/contest/SaurenKhosla.pdf.
[31] Wood. 2022. "Algorithm Predicts Crime a Week in Advance, but Reveals Bias in Police Response."