Abstract

The algorithmic fairness of predictive analytic tools in the public sector has increasingly become a topic of rigorous exploration. While instruments pertaining to criminal recidivism and academic admissions, for example, have garnered much attention, the predictive instruments of Child Welfare jurisdictions have received considerably less scrutiny. This is in part because comparatively few such instruments exist and because even fewer have been examined through the lens of algorithmic fairness. In this work, we address both gaps. First, we present a novel classification algorithm for predicting reunification success within Oregon Child Welfare. The purpose of this tool is to maximize the number of stable reunifications and to identify potentially unstable reunifications that may require additional resources and scrutiny. Second, because the developed tool, if left unaltered, clearly falls short on algorithmic fairness, we present the procedure used to mitigate that unfairness, along with the rationale behind each of the complex and unavoidable choices it entails. Although similar to other post-processing group-specific thresholding methods, this procedure is novel in its use of a penalized optimizer and contextually requisite subsampling. These methodological components yield a rich and informative empirical understanding of the trade-off continuum between fairness and accuracy. Because the procedure generalizes across a variety of group-level definitions of algorithmic fairness, as well as across an arbitrary number of protected-attribute levels and risk thresholds, this approach presents the opportunity to critically and broadly influence the equity and fairness implications of a Child Welfare jurisdiction's automated decision processes.
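As a purely illustrative sketch of the kind of post-processing described above, the snippet below searches over per-group classification thresholds, maximizing accuracy minus a penalty on the gap in group true positive rates, after subsampling each group to a common size. Every specific choice here (the TPR-parity criterion, the penalty weight `lam`, the equal-size subsampling, and a brute-force grid search standing in for the paper's penalized optimizer) is an assumption for illustration, not the authors' implementation.

```python
# Illustrative sketch only: group-specific threshold selection with a fairness penalty.
# The criterion (TPR parity), penalty weight `lam`, subsampling scheme, and grid search
# are assumptions; they are not the procedure developed in the paper.
import numpy as np
from itertools import product

def tpr(y_true, y_pred):
    """True positive rate; returns 0.0 when the group has no positives."""
    pos = y_true == 1
    return y_pred[pos].mean() if pos.any() else 0.0

def subsample_groups(y_true, scores, groups, rng):
    """Downsample each group to the size of the smallest group (assumed step)."""
    n_min = min((groups == g).sum() for g in np.unique(groups))
    keep = np.concatenate([
        rng.choice(np.flatnonzero(groups == g), size=n_min, replace=False)
        for g in np.unique(groups)
    ])
    return y_true[keep], scores[keep], groups[keep]

def fit_group_thresholds(y_true, scores, groups, lam=1.0, grid=None, seed=0):
    """Grid search over per-group thresholds, penalizing the max TPR gap."""
    rng = np.random.default_rng(seed)
    y_true, scores, groups = subsample_groups(y_true, scores, groups, rng)
    grid = np.linspace(0.05, 0.95, 19) if grid is None else grid
    levels = np.unique(groups)
    best, best_obj = None, -np.inf
    for thresholds in product(grid, repeat=len(levels)):
        y_pred = np.zeros_like(y_true)
        group_tprs = []
        for g, t in zip(levels, thresholds):
            mask = groups == g
            y_pred[mask] = (scores[mask] >= t).astype(int)
            group_tprs.append(tpr(y_true[mask], y_pred[mask]))
        accuracy = (y_pred == y_true).mean()
        fairness_gap = max(group_tprs) - min(group_tprs)  # TPR-parity gap
        objective = accuracy - lam * fairness_gap         # penalized objective
        if objective > best_obj:
            best_obj, best = objective, dict(zip(levels, thresholds))
    return best
```

Sweeping `lam` in such a setup traces an empirical fairness-accuracy trade-off curve, which is the general shape of analysis the abstract refers to; the exhaustive grid is tractable only for a small number of protected-attribute levels and would be replaced by a proper optimizer in practice.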
