Abstract

In recent years, ensemble learning has received growing interest, primarily for classification tasks. It is based on the premise that combining the outputs of multiple experts yields better results than any individual expert. Ensemble feature selection can improve the performance of learning algorithms and produce more stable and robust results. However, during feature aggregation and selection, the selected feature subset may contain high levels of inter-feature redundancy. To address this issue, a novel algorithm for ensemble feature selection based on feature rank aggregation and a graph theoretic technique (R-GEFS), with a fusion of Pearson and Spearman correlation metrics, is proposed. The method first aggregates the preference profiles of five feature rankers used as base selectors. Similar features are then grouped into clusters using a graph theoretic approach, and the most representative feature, the one most strongly correlated with the target decision classes, is drawn from each cluster. The efficiency and effectiveness of R-GEFS are evaluated through an empirical study. Extensive experiments on 15 diverse benchmark datasets compare R-GEFS with seven state-of-the-art feature selection models using four popular classifiers: decision tree, k-nearest neighbor, random forest, and support vector machine. The proposed method proves effective, selecting smaller feature subsets with lower computational cost while improving classification accuracy.
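The abstract does not give implementation details, so the following Python sketch only illustrates the general idea it describes: aggregate ranks from several base rankers, build a feature-redundancy graph from a fused Pearson/Spearman similarity, and keep one representative feature per cluster. The choice of rankers, the similarity threshold, the use of connected components as "clusters", and all function names are assumptions for illustration, not the authors' R-GEFS code.

```python
# Hypothetical sketch of the R-GEFS idea (assumptions, not the paper's implementation).
import numpy as np
from scipy.stats import pearsonr, spearmanr
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def rgefs_sketch(X, y, rankers, similarity_threshold=0.8):
    n_features = X.shape[1]

    # Step 1: rank aggregation -- mean rank across the base rankers.
    # Each ranker returns one relevance score per feature (higher = better).
    ranks = np.mean(
        [np.argsort(np.argsort(-ranker(X, y))) for ranker in rankers], axis=0
    )

    # Step 2: fused feature-feature similarity, averaging |Pearson| and |Spearman|.
    sim = np.zeros((n_features, n_features))
    for i in range(n_features):
        for j in range(i + 1, n_features):
            p = abs(pearsonr(X[:, i], X[:, j])[0])
            s = abs(spearmanr(X[:, i], X[:, j])[0])
            sim[i, j] = sim[j, i] = (p + s) / 2

    # Step 3: redundancy graph -- edges connect highly similar features;
    # connected components serve as the feature clusters in this sketch.
    adjacency = csr_matrix((sim > similarity_threshold).astype(int))
    n_clusters, labels = connected_components(adjacency, directed=False)

    # Step 4: from each cluster keep the feature most correlated with the target,
    # breaking ties by the aggregated rank (lower mean rank = better).
    selected = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        relevance = np.array([abs(pearsonr(X[:, f], y)[0]) for f in members])
        best = members[np.lexsort((ranks[members], -relevance))[0]]
        selected.append(int(best))
    return sorted(selected)
```

As a usage illustration, any per-feature scoring functions could stand in for the paper's five base rankers, e.g. `mutual_info_classif` and `lambda X, y: f_classif(X, y)[0]` from `sklearn.feature_selection`; the specific rankers used in R-GEFS are not stated in the abstract.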

