Abstract

Algorithms play an essential and expanding role in public policy decisions, including those in criminal justice. This short paper reports on the first author’s summer research project characterizing the tradeoff between accuracy and fairness in parole decision predictions. The dataset employed in this study contains over 30,000 parole decisions made by the New York State Division of Criminal Justice Services. Each record includes demographic information on the subject, such as sex and race/ethnicity, the parole decision itself, and predictive features describing the crime committed and the parole interview held. Logistic regression, decision tree, support vector machine, and random forest models are trained to predict parole decisions from the available features. Most models fail standard fairness tests on most fairness metrics. Moreover, while there may be an overall tradeoff between fairness and accuracy, the observed differences in accuracy are too small to support a firm conclusion. Future research may extend the preliminary work introduced in this paper by using multiple real-world datasets to investigate the tradeoff between accuracy and fairness.
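The kind of analysis the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it uses synthetic data in place of the New York State parole dataset (which is not reproduced here), scikit-learn implementations of the four model families named above, and demographic parity difference as one example of a group fairness metric. All variable names and data-generating assumptions are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000

# Hypothetical stand-ins for the real predictive features
# (crime and interview descriptors) and a binary protected attribute.
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)  # e.g. a race/ethnicity indicator
# Synthetic labels with an injected group effect, so a fairness gap exists.
y = (X[:, 0] + 0.5 * group + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=0
)

# The four model families named in the abstract.
models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "svm": SVC(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = (pred == y_te).mean()
    # Demographic parity difference: the gap in positive-prediction
    # rates between the two groups (0 would be perfectly "fair" by
    # this metric; other metrics, e.g. equalized odds, differ).
    dpd = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())
    print(f"{name}: accuracy={acc:.3f}, demographic parity diff={dpd:.3f}")
```

Comparing the accuracy column against the fairness column across models is one simple way to probe the accuracy-fairness tradeoff the paper discusses.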
