Abstract

Fairness is, along with explainability, a key requirement for assessing the trustworthiness of Artificial Intelligence methods employed in credit rating. While explainability concerns the capability to identify the main drivers of credit ratings, fairness evaluates how similar the credit ratings assigned to different population groups are. Although explainability and fairness are closely connected, no research papers have considered them jointly. We aim to fill this gap. To this purpose, we propose a general post-processing methodology for credit ratings based on machine learning models, which measures explainability by means of the Shapley–Lorenz values of the explanatory variables, and fairness by comparing the Shapley–Lorenz values calculated in each population group, conditionally on the explanatory variables. We test our approach on a credit rating model applied to a panel dataset of 119,857 credit records for approximately 20,000 small and medium-sized enterprises (SMEs) in four European countries and eleven industry sectors over the period 2015 to 2020. The results indicate which variables are the most explanatory and show that the credit rating model is fair across countries and industrial sectors. However, when subsamples of smaller size are considered, the model is not fair, indicating that fairness depends on the available data.
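To make the methodology concrete, the following Python sketch illustrates the two-step idea described above: compute the Shapley–Lorenz value of each variable (its Shapley-weighted marginal contribution to the Lorenz zonoid of the fitted predictions), then compare those values across population groups as a fairness check. This is our illustration, not the authors' implementation: the Lorenz zonoid is estimated with a covariance-based Gini formula, a plain linear regression stands in for the paper's credit rating model, and the function names (lorenz_zonoid, shapley_lorenz, group_fairness_gap) and the synthetic data are hypothetical.

from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LinearRegression

def lorenz_zonoid(v):
    # Lorenz zonoid of a (positive-valued) vector, estimated via the
    # covariance form of the Gini measure: 2*cov(v, rank(v)) / (n * mean(v)).
    # The exact estimator used in the paper is an assumption here.
    v = np.asarray(v, dtype=float)
    n = len(v)
    ranks = np.argsort(np.argsort(v)) + 1  # ranks 1..n
    return 2.0 * np.cov(v, ranks, bias=True)[0, 1] / (n * v.mean())

def coalition_lz(X, y, cols):
    # Zonoid of predictions from a model restricted to the coalition `cols`.
    if not cols:
        return 0.0  # empty coalition: constant prediction, zero zonoid
    cols = list(cols)
    model = LinearRegression().fit(X[:, cols], y)  # stand-in for the rating model
    return lorenz_zonoid(model.predict(X[:, cols]))

def shapley_lorenz(X, y, k):
    # Shapley-weighted marginal contribution of feature k to the zonoid.
    p = X.shape[1]
    others = [j for j in range(p) if j != k]
    value = 0.0
    for size in range(p):
        for S in combinations(others, size):
            w = factorial(len(S)) * factorial(p - len(S) - 1) / factorial(p)
            value += w * (coalition_lz(X, y, S + (k,)) - coalition_lz(X, y, S))
    return value

def group_fairness_gap(X, y, groups, k):
    # Recompute the Shapley-Lorenz value of feature k within each group;
    # a large spread suggests the variable drives ratings unevenly across groups.
    vals = {g: shapley_lorenz(X[groups == g], y[groups == g], k)
            for g in np.unique(groups)}
    return vals, max(vals.values()) - min(vals.values())

# Hypothetical synthetic demo: positive "ratings" driven mainly by feature 0.
rng = np.random.default_rng(42)
n = 400
X = rng.normal(size=(n, 3))
y = 10.0 + 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
groups = rng.choice(["country_A", "country_B"], size=n)
for k in range(3):
    vals, gap = group_fairness_gap(X, y, groups, k)
    print(f"feature {k}: per-group Shapley-Lorenz {vals}, gap {gap:.4f}")

Note that exact Shapley computation enumerates all coalitions and is exponential in the number of variables; with the dozens of explanatory variables typical of credit rating models, a sampled or grouped approximation would be needed in practice.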
