Background: An improved understanding of the reasoning behind model decisions can enhance the use of machine learning (ML) models in credit scoring. Although ML models are widely regarded as highly accurate, their use in settings that require an explanation of model decisions has been limited by their lack of transparency. In the banking sector especially, model risk frameworks frequently require a significant level of model interpretability.
Aim: The aim of the article is to evaluate traditional model risk frameworks to determine their appropriateness for validating ML models in credit scoring, and to enhance the use of ML models in regulated environments by introducing an ML interpretability technique into model validation frameworks.
Setting: The research considers model risk frameworks and regulatory guidelines from various international institutions.
Method: The research is qualitative in nature and shows how, by integrating traditional and non-traditional model risk frameworks, the practitioner can leverage trusted techniques while extending traditional frameworks to address key principles such as transparency.
Results: The article proposes a model risk framework that utilises Shapley values to improve the explainability of ML models in credit scoring. Practical validation tests are proposed to enable transparency of model input variables during the validation of ML models.
Conclusion: Our results show that one can formulate a comprehensive validation process by integrating traditional and non-traditional frameworks.
Contribution: This study contributes to the existing model risk literature by proposing a new model validation framework that utilises Shapley values to explain ML model predictions in credit scoring.
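To illustrate the kind of Shapley-value transparency check the framework relies on, the sketch below shows how per-variable Shapley contributions might be computed for a credit-scoring classifier using the open-source shap package. This is a minimal, hypothetical example for illustration only; the model, synthetic data, and variable names are assumptions and are not taken from the article.

```python
# Illustrative sketch only (not the article's implementation): Shapley values
# for a credit-scoring classifier via the open-source `shap` package.
# The synthetic data and model choice below are assumptions for demonstration.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic applicant features (stand-ins for inputs such as income,
# credit utilisation, and age of bureau file).
X = rng.normal(size=(500, 3))
# Synthetic default flag, loosely driven by the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer produces per-applicant Shapley values: the contribution of
# each input variable to the model's score relative to the average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# A simple global transparency view for validation: the mean absolute
# Shapley value per input variable, indicating which inputs drive decisions.
print(np.abs(shap_values).mean(axis=0))
```

In a validation setting, such per-variable contributions could be compared against business expectations of which inputs should drive credit decisions, giving the transparency check that traditional frameworks require.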