Abstract
With respect to racial discrimination in lending, we introduce global Shapley value and Shapley–Lorenz explainable AI methods to attain algorithmic justice. Using 157,269 loan applications during 2017 in New York, we confirm that these methods, in agreement with the coefficients of a logistic regression model, reveal prima facie evidence of racial discrimination. Critically, we show that these explainable AI methods can enable a financial institution to select an opaque creditworthiness model that balances out-of-sample predictive performance with ethical considerations.
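To make the abstract's central tool concrete, the sketch below computes exact Shapley values for a logistic credit-scoring model and aggregates them into a global importance measure. Everything here is illustrative and not from the paper: the three features, the coefficients, and the background sample are hypothetical stand-ins, and the interventional value function (marginalising absent features over a background sample) is one standard choice, not necessarily the authors' exact formulation.

```python
import itertools
import math
import numpy as np

# Hypothetical fitted logistic credit model with three applicant features
# (e.g. income, debt ratio, a protected-group indicator). The coefficients
# are illustrative, not estimated from the paper's HMDA data.
coef = np.array([1.2, -0.8, -0.5])
intercept = 0.1

def predict(X):
    """Probability of loan approval under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(X @ coef + intercept)))

def shapley_values(x, background):
    """Exact Shapley values for one applicant x: absent features are
    marginalised over a background sample (interventional expectation)."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                # Shapley kernel weight for a coalition of size |S|
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                # Expected prediction with and without feature i fixed to x[i]
                X_with = background.copy()
                X_with[:, list(S) + [i]] = x[list(S) + [i]]
                X_without = background.copy()
                X_without[:, list(S)] = x[list(S)]
                phi[i] += w * (predict(X_with).mean() - predict(X_without).mean())
    return phi

rng = np.random.default_rng(0)
background = rng.normal(size=(200, 3))  # synthetic applicant pool

# Local attribution for one applicant
x = np.array([0.5, 1.0, 1.0])
phi = shapley_values(x, background)

# Global Shapley importance: mean absolute attribution across applicants
global_phi = np.mean(
    [np.abs(shapley_values(xi, background)) for xi in background[:20]], axis=0
)

# Efficiency check: local attributions sum to f(x) minus the mean prediction
print(np.allclose(phi.sum(), predict(x[None, :])[0] - predict(background).mean()))
```

The efficiency property verified at the end is what makes Shapley attributions auditable: every gap between an applicant's score and the average score is fully allocated across features, so a persistent attribution on a protected characteristic is direct, decomposable evidence of the kind the abstract describes.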