Abstract

Feature selection is crucial to the credit-scoring process, allowing for the removal of irrelevant variables with low predictive power. Conventional credit-scoring techniques treat this as a separate process in which features are selected to improve a single statistical measure, such as accuracy; recent research, however, has shifted focus to meaningful business parameters such as profit. More than one factor may matter in the selection process, making multi-objective optimization methods a necessity. However, the comparative performance of multi-objective methods is known to vary with the test problem and the specific implementation. This research employed a recent hybrid non-dominated sorting binary Grasshopper Optimization Algorithm (NSBGOA) and compared its performance on multi-objective feature selection for credit scoring with that of two popular benchmark algorithms in this space. A further comparison was made to determine the impact of changing the profit-maximizing base classifier on algorithm performance. Experiments demonstrate that, of the base classifiers used, the neural network classifier improved the profit-based measure and minimized the mean number of features in the population the most. Additionally, the NSBGOA algorithm gave relatively smaller hypervolumes and longer computational times across all base classifiers, while producing the highest mean objective values for the solutions. It is clear that the base classifier has a significant impact on the results of multi-objective optimization; therefore, the choice of base classifier deserves careful consideration in such scenarios.
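The hypervolume metric compared in the abstract measures the region of objective space that a Pareto front dominates relative to a reference point, so a larger value indicates a better spread of trade-off solutions. As a minimal illustration (not code from the paper), the two-objective case reduces to summing rectangles between consecutive front points, assuming both objectives are minimized and the reference point is supplied by the analyst:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D front under minimization, w.r.t. reference point `ref`."""
    # Keep only points that strictly improve on the reference point, sorted by f1.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv = 0.0
    prev_f2 = ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip dominated points; on a Pareto front f2 decreases as f1 grows
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Hypothetical trade-off solutions with reference point (1, 1)
front = [(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]
print(hypervolume_2d(front, (1.0, 1.0)))
```

With more than two objectives, exact hypervolume computation becomes substantially more expensive, which is one reason it is reported as a summary quality indicator rather than used inside every optimization loop.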

Highlights

  • Other factors, such as the profitability of the resulting model [4,5], have become a focus of the feature-selection process. These factors, which depend on the data and application, can be incorporated into feature selection as objectives in multi-objective optimization (MOO)

  • A comparison was made of the effect of different base classifiers on multi-objective feature-selection methods in credit scoring

  • The base classifier was found to have an uneven impact on the hypervolume of multi-objective optimization output


Introduction

MOO algorithms allow designers to balance several, often conflicting, objectives [6]. In feature selection, these methods have been applied to simultaneously consider the number of features and another training objective, such as profit [7]. Several algorithms have been developed to handle MOO problems, including the Strength Pareto Evolutionary Algorithm 2 (SPEA-II), the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [8,9], and the latter's reference-based adaptation for many-objective problems, NSGA-III. Hybrid algorithms, which integrate aspects of two or more optimization methods, have also been employed. An example is the adaptation of the continuous Grasshopper Optimization Algorithm (GOA)
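The "non-dominated sorting" that NSGA-II (and, by extension, the hybrid NSBGOA) relies on partitions a population into ranked Pareto fronts: rank 0 contains solutions dominated by no one, rank 1 contains solutions dominated only by rank 0, and so on. A minimal sketch of this procedure, with an invented example population of (number of features, negated profit) objective vectors, both minimized:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(pop):
    """Return Pareto fronts as lists of indices into `pop`, best rank first."""
    n = len(pop)
    dominated_by = [[] for _ in range(n)]  # solutions that p dominates
    dom_count = [0] * n                    # how many solutions dominate p
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(pop[p], pop[q]):
                dominated_by[p].append(q)
            elif dominates(pop[q], pop[p]):
                dom_count[p] += 1
        if dom_count[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                dom_count[q] -= 1
                if dom_count[q] == 0:  # only solutions in earlier fronts dominated q
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]  # drop the trailing empty front

# Objectives per solution: (feature count, -profit), both to be minimized
pop = [(3, -0.9), (5, -0.9), (3, -0.7), (8, -1.0)]
print(non_dominated_sort(pop))
```

Here solutions 0 and 3 are mutually non-dominated (fewer features versus higher profit) and form rank 0, while 1 and 2 fall to rank 1. In a feature-selection setting, each solution would additionally carry a binary mask over the candidate features.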
