Abstract

“Matching” procedures in statistics construct datasets with similar covariate distributions across compared groups. Matching has recently been proposed as a means of addressing fairness impossibility (i.e., the mutual inconsistency of fairness metrics) in AI and ML systems: Beigang argues on conceptual grounds that, when matched rather than unmatched datasets are analyzed, the tradeoff between the fairness metrics equalized odds (EO) and positive predictive value (PPV) will be reduced. Here we evaluate matching as a practical rather than merely conceptual approach to reducing fairness impossibility. As a case study, we conduct pre-match and post-match analyses of the well-known COMPAS dataset from Broward Co., Florida, 2013-2014. We then reflect on what these results suggest about the effects of matching on (a) accuracy estimates, (b) fairness estimates, and (c) the difference between fairness estimates, that is, the extent to which matching reduces “fairness impossibility” in practice. We conclude that matching is a promising tool for improving evaluations on all three fronts, but that it faces problems from potential biases introduced by the matching procedures themselves, as well as limited power under conditions common in ML evaluation contexts, such as non-independent covariates and relevant hidden variables.
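
To illustrate the pre-/post-match comparison described above, the sketch below computes between-group EO gaps (TPR and FPR differences) and the PPV gap before and after a crude 1:1 exact match. This is not the paper's code: the dataframe layout, the column names (group, label, pred), and the covariates used for matching are hypothetical placeholders, not the COMPAS schema.

```python
# Minimal sketch of a pre-/post-match fairness comparison.
# All column names here are hypothetical placeholders.
import numpy as np
import pandas as pd

def rates(df):
    """Return (TPR, FPR, PPV) for one group's binary labels/predictions."""
    tp = ((df.pred == 1) & (df.label == 1)).sum()
    fp = ((df.pred == 1) & (df.label == 0)).sum()
    fn = ((df.pred == 0) & (df.label == 1)).sum()
    tn = ((df.pred == 0) & (df.label == 0)).sum()
    tpr = tp / (tp + fn) if tp + fn else np.nan
    fpr = fp / (fp + tn) if fp + tn else np.nan
    ppv = tp / (tp + fp) if tp + fp else np.nan
    return tpr, fpr, ppv

def fairness_gaps(df, group_col="group"):
    """Absolute between-group gaps in TPR/FPR (equalized odds) and PPV,
    comparing the first two groups in the data."""
    (_, d0), (_, d1) = list(df.groupby(group_col))[:2]
    r0, r1 = rates(d0), rates(d1)
    return {"EO_tpr_gap": abs(r0[0] - r1[0]),
            "EO_fpr_gap": abs(r0[1] - r1[1]),
            "PPV_gap": abs(r0[2] - r1[2])}

def exact_match(df, covariates, group_col="group"):
    """Crude 1:1 exact matching: within each covariate profile, keep equal
    numbers of rows from each group; drop profiles seen in only one group."""
    matched = []
    for _, cell in df.groupby(covariates):
        by_group = [d for _, d in cell.groupby(group_col)]
        if len(by_group) < 2:
            continue  # unmatched profile: discarded by the procedure
        n = min(len(d) for d in by_group)
        matched.extend(d.sample(n, random_state=0) for d in by_group)
    return pd.concat(matched, ignore_index=True)

# Usage on a hypothetical dataframe df with binary label/pred columns:
# gaps_pre  = fairness_gaps(df)
# gaps_post = fairness_gaps(exact_match(df, ["age_bin", "priors_bin"]))
```

Exact matching is only one choice among several; propensity-score or nearest-neighbor matching are common alternatives, and, as noted above, the discarding of unmatched rows is itself a potential source of bias.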
