Abstract

Background: Credit scoring models are an effective tool for banks and other financial institutions to identify potential default borrowers. Credit scoring models based on machine learning methods such as deep learning perform well in terms of the accuracy of default discrimination, but they also have notable shortcomings, such as a large number of hyperparameters and a heavy dependence on big data, and there is still considerable room to improve their interpretability and robustness. Methods: Deep forest, or multi-grained cascade forest (gcForest), is a deep model built from decision trees and based on the random forest algorithm. Using multi-grained scanning and cascade processing, gcForest can effectively identify and process high-dimensional feature information; at the same time, it has few hyperparameters and strong robustness. This paper therefore constructs a two-stage hybrid default discrimination model based on multiple feature selection methods and the gcForest algorithm, and optimizes its parameters with the lowest type II error as the first principle and the highest AUC and accuracy as the second and third principles. gcForest not only retains the advantages of traditional statistical models in interpretability and robustness but also approaches the accuracy of deep learning models. Results: The validity of the hybrid default discrimination model is verified on three real open credit data sets (Australian, Japanese, and German) from the UCI repository. Conclusions: gcForest outperforms popular single classifiers such as ANN, as well as common ensemble classifiers such as LightGBM and CNNs, in terms of type II error, AUC, and accuracy. In comparison with other similar studies, the robustness and effectiveness of the model are further verified.
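The cascade structure mentioned above is the core of gcForest. The following is a minimal sketch, assuming a scikit-learn environment, of the layer-stacking idea only: each layer's class-probability outputs are concatenated with the original features and fed to the next layer, and layers stop growing when a held-out score no longer improves. It is illustrative and is not the authors' implementation, which also uses multi-grained scanning and k-fold generated class vectors.

```python
# Illustrative cascade-forest sketch (not the paper's code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def fit_cascade(X, y, max_layers=5, random_state=0):
    """Grow cascade layers until the validation score stops improving."""
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=random_state)
    layers, best_score = [], -np.inf
    aug_tr, aug_va = X_tr, X_va
    for _ in range(max_layers):
        # Each layer is an ensemble of forests (here: one random forest, one extra-trees).
        layer = [RandomForestClassifier(n_estimators=200, random_state=random_state),
                 ExtraTreesClassifier(n_estimators=200, random_state=random_state)]
        probas_tr, probas_va = [], []
        for clf in layer:
            clf.fit(aug_tr, y_tr)
            probas_tr.append(clf.predict_proba(aug_tr))
            probas_va.append(clf.predict_proba(aug_va))
        score = accuracy_score(y_va, np.mean(probas_va, axis=0).argmax(axis=1))
        if score <= best_score:      # stop growing when validation no longer improves
            break
        best_score = score
        layers.append(layer)
        # Augment the original features with this layer's class-probability vectors.
        aug_tr = np.hstack([X_tr] + probas_tr)
        aug_va = np.hstack([X_va] + probas_va)
    return layers   # prediction replays the layers in the same order
```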

Highlights

  • In recent years, research on the default discriminant model has received extensive attention from researchers and financial institutions

  • The feature selection methods used in this stage are: (1) full-variable Logistic regression; (2) stepwise regression based on the AIC criterion; (3) stepwise regression based on the BIC criterion; (4) Lasso-Logistic regression; (5) Elastic Net Logistic regression (an illustrative sketch follows this list)

  • After data preprocessing in the first stage (Section 4.2), the five feature selection algorithms are applied, and their results are evaluated according to the type II error, AUC, and accuracy of a Logistic regression
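
The sketch below illustrates selector (4) and the evaluation step from the last bullet, assuming a scikit-learn environment: an L1-penalised (Lasso) Logistic regression whose zero coefficients drop features, followed by a plain Logistic regression on the kept features scored by type II error, AUC, and accuracy. Function names, the regularisation strength `C`, and the split sizes are illustrative assumptions, not the paper's actual settings; Elastic Net selection works the same way with `penalty="elasticnet"` and the `saga` solver.

```python
# Illustrative feature-selection and evaluation sketch (not the paper's code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score, confusion_matrix

def lasso_select(X, y, C=0.1):
    """Return indices of features kept by an L1-penalised Logistic regression."""
    lasso = make_pipeline(StandardScaler(),
                          LogisticRegression(penalty="l1", solver="liblinear", C=C))
    lasso.fit(X, y)
    coef = lasso[-1].coef_.ravel()
    return np.flatnonzero(coef != 0)

def evaluate(X, y, kept, random_state=0):
    """Type II error, AUC, and accuracy of a Logistic regression on the kept features.
    Labels are assumed binary with 1 = default (the positive class)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X[:, kept], y, test_size=0.3, stratify=y, random_state=random_state)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_tr, y_tr)
    prob = clf.predict_proba(X_te)[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    type2_error = fn / (fn + tp)   # defaulters misclassified as non-default
    return type2_error, roc_auc_score(y_te, prob), accuracy_score(y_te, pred)
```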

Summary

Introduction

Research on the default discriminant model has received extensive attention from researchers and financial institutions. To address the shortcomings of existing research and improve the interpretability, classification performance, and robustness of the credit scoring model, this paper establishes a new two-stage hybrid model combining multiple feature selection methods and gcForest. The model considers the differences and complementarities between traditional statistical models and artificial intelligence models and combines the two so that they complement each other. Zhou et al. (2017) proposed gcForest, a new tree-based ensemble method, and showed that it is highly competitive with deep neural networks (DNNs) across a wide range of tasks.

Feature Selection
Application of Deep Learning Model in Credit Scoring
Construction of Hybrid Default Discriminant Model Based on GcForest
Experimental Data Set
Data Preprocessing
Evaluation Indicators
Analysis of Feature Selection Results
Analysis of the Results of Default Discrimination
Evaluation Indicators
Comparison with Other Studies
Conclusions
Methods