Abstract

There has been a growing awareness of bias in machine learning and a proliferation of different notions of fairness. While formal definitions of fairness outline different ways fairness might be computed, some notions of fairness provide little guidance on how to implement machine learning in practice. In juvenile justice settings in particular, computational solutions to fairness often lead to ethical quandaries. Achieving algorithmic fairness in a setting with long roots in structural racism, using data that reflects those inequalities, may not be possible. Because different racial groups experience key outcomes (like a new disposition) at markedly different rates, it is difficult for any machine learning model to produce similar accuracy, false positive rates, and false negative rates across groups. These ideas are tested with data from a large, urban county in the Midwest United States to examine how different models and different cutoffs combine to show the possibilities and limits of achieving machine learning fairness in an applied justice setting.
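
The tension described above can be illustrated with a minimal, hypothetical sketch (not the paper's models or data): when two groups experience an outcome at different base rates, applying a single cutoff to even a well-calibrated risk score generally yields different accuracy, false positive rates, and false negative rates across the groups. The base rates, score distributions, and cutoff below are invented purely for illustration.

```python
# Illustrative sketch only; the base rates, Beta risk distributions,
# and cutoff are hypothetical and not taken from the study.
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, base_rate):
    """Simulate one group's outcomes and a perfectly calibrated risk score.

    Individual risk p is drawn from a Beta distribution whose mean equals
    the group's base rate (the concentration of 10 is arbitrary); the true
    outcome y is Bernoulli(p), and the score is p itself.
    """
    p = rng.beta(base_rate * 10, (1 - base_rate) * 10, size=n)
    y = rng.binomial(1, p)
    return y, p

def group_metrics(y, score, cutoff):
    """Accuracy, false positive rate, and false negative rate at a cutoff."""
    pred = (score >= cutoff).astype(int)
    acc = (pred == y).mean()
    fpr = pred[y == 0].mean()        # flagged among those without the outcome
    fnr = (1 - pred[y == 1]).mean()  # missed among those with the outcome
    return acc, fpr, fnr

# Hypothetical base rates chosen only to show the divergence.
groups = {"Group A": 0.15, "Group B": 0.35}
cutoff = 0.3

for name, base_rate in groups.items():
    y, score = simulate_group(10_000, base_rate)
    acc, fpr, fnr = group_metrics(y, score, cutoff)
    print(f"{name}: base rate={base_rate:.2f}  "
          f"accuracy={acc:.3f}  FPR={fpr:.3f}  FNR={fnr:.3f}")
```

Even though the score is calibrated in the same way for both simulated groups, the group with the higher base rate receives more high scores, so the same cutoff produces a higher false positive rate and a lower false negative rate for that group, echoing the trade-off the abstract describes.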
