Abstract
A recent wave of research has attempted to define fairness quantitatively. In particular, this work has explored what fairness might mean in the context of decisions based on the predictions of statistical and machine learning models. The rapid growth of this new field has led to wildly inconsistent motivations, terminology, and notation, presenting a serious challenge for cataloging and comparing definitions. This article attempts to bring much-needed order. First, we explicate the various choices and assumptions made—often implicitly—to justify the use of prediction-based decision-making. Next, we show how such choices and assumptions can raise fairness concerns and we present a notationally consistent catalog of fairness definitions from the literature. In doing so, we offer a concise reference for thinking through the choices, assumptions, and fairness considerations of prediction-based decision-making.
Highlights
Prediction-based decision-making has swept through industry and is quickly making its way into government. These techniques are already common in lending (Hardt et al. 2016, Liu et al. 2018, Fuster et al. 2020), hiring (Miller 2015a,b; Hu & Chen 2018a), and online advertising (Sweeney 2013), and they increasingly figure into decisions regarding pretrial detention (Angwin et al. 2016, Dieterich et al. 2016, Larson et al. 2016), immigration detention (Koulish 2016), child maltreatment screening (Vaithianathan et al. 2013, Chouldechova et al. 2018, Eubanks 2018), and public health.
Attention has focused on how consequential predictive models may be biased—an overloaded word that, in popular media, has come to mean that the model’s performance unjustifiably differs along social axes such as race, gender, and class.
Uncovering and rectifying such biases in statistical and machine learning models has motivated a field of research we call algorithmic fairness.
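To make concrete what it means for a model's performance to differ along a social axis, here is a minimal illustrative sketch (not from the article, with entirely made-up data) of two common quantitative fairness checks: comparing selection rates across groups (demographic parity) and comparing false positive rates across groups (a form of error-rate parity).

```python
# Illustrative sketch only: hypothetical binary predictions and outcomes
# for two groups, used to show how group-wise disparities are measured.

def selection_rate(preds):
    """Fraction of individuals receiving a positive prediction."""
    return sum(preds) / len(preds)

def false_positive_rate(preds, labels):
    """Fraction of true negatives incorrectly predicted positive."""
    false_pos = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    negatives = sum(1 for y in labels if y == 0)
    return false_pos / negatives

# Hypothetical data for two demographic groups, A and B.
groups = {
    "A": {"preds": [1, 1, 0, 1, 0, 0], "labels": [1, 0, 0, 1, 0, 1]},
    "B": {"preds": [1, 0, 0, 0, 0, 0], "labels": [1, 0, 0, 1, 0, 1]},
}

# Demographic parity asks whether selection rates match across groups;
# error-rate parity asks whether, e.g., false positive rates match.
for name, g in groups.items():
    print(name,
          round(selection_rate(g["preds"]), 2),
          round(false_positive_rate(g["preds"], g["labels"]), 2))
```

In this toy data, group A is selected at a higher rate and suffers a higher false positive rate than group B; definitions in the literature differ over which such gaps count as unfair and under what assumptions.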