Abstract

This article reviews the recent literature on algorithmic fairness, with a particular emphasis on credit scoring. We discuss human versus machine bias, bias measurement, group versus individual fairness, and a collection of fairness metrics. We then apply these metrics to the US mortgage market, analyzing Home Mortgage Disclosure Act data on mortgage applications between 2009 and 2015. We find evidence of group imbalance in the dataset for both gender and (especially) minority status, which can lead to poorer estimation and prediction for female and minority applicants. Loan applicants are treated mostly fairly at both the group and the individual level, though we find cases where otherwise similar male (nonminority) neighbors of rejected female (minority) applicants were granted loans, a finding that warrants further study. Finally, modern machine learning techniques substantially outperform logistic regression (the industry standard), though at the cost of being considerably harder to explain to denied applicants, regulators, or the courts.
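The abstract does not enumerate its collection of fairness metrics, but as a minimal illustrative sketch (not drawn from the article or from the HMDA data), two widely used group fairness metrics, statistical parity difference and equal opportunity difference, can be computed on binary approval decisions as follows. All variable names and the synthetic data below are hypothetical assumptions for illustration.

    import numpy as np

    def statistical_parity_difference(y_pred, group):
        # P(granted | group = 0) - P(granted | group = 1)
        # y_pred: binary decisions (1 = loan granted)
        # group : binary protected attribute (1 = protected group)
        return y_pred[group == 0].mean() - y_pred[group == 1].mean()

    def equal_opportunity_difference(y_true, y_pred, group):
        # Gap in true positive rates between groups, restricted to
        # applicants who were in fact creditworthy (y_true = 1).
        tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
        return tpr_0 - tpr_1

    # Synthetic toy data; real use would substitute model decisions
    # and observed repayment outcomes. Values near 0 suggest parity.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)    # 1 = protected group
    y_true = rng.integers(0, 2, size=1000)   # 1 = repaid in hindsight
    y_pred = rng.integers(0, 2, size=1000)   # 1 = loan granted

    print(statistical_parity_difference(y_pred, group))
    print(equal_opportunity_difference(y_true, y_pred, group))

In a credit-scoring setting, statistical parity compares raw approval rates across groups, while equal opportunity compares approval rates only among applicants who would have repaid, which is closer in spirit to the individual-level neighbor comparisons the abstract describes.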
