Abstract

Feature selection has become an indispensable preprocessing step in data mining as high volumes of data have become prevalent with advances in technology. The objective of feature selection is twofold: reducing the amount of data and improving learning performance. In this study, we leverage the multi-core nature of a regular PC to build a robust framework for feature selection. The framework executes the feature selection algorithm on four processors in parallel. In line with the No Free Lunch Theorem, we provide 40 different execution settings for the processors by combining two multiobjective selection algorithms, four initial population generation methods, and five machine learning techniques. In addition, we introduce six setting selection schemes to decide the most fruitful setting for each processor. We carry out extensive experiments on 11 UCI benchmark datasets and analyze the results with statistical tests. Finally, we compare the proposed method with state-of-the-art studies and record remarkable improvements in terms of maximum accuracy.
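The 40 execution settings arise as the Cartesian product of the two selection algorithms, four initialization methods, and five learners (2 × 4 × 5 = 40). The following is a minimal sketch of that enumeration and a round-robin assignment to the four processors; the component names are illustrative placeholders, since the abstract does not list the paper's concrete choices.

```python
from itertools import product

# Hypothetical component names; the abstract does not name the actual ones.
selection_algorithms = ["moea-1", "moea-2"]                    # 2 multiobjective selectors
init_methods = ["init-1", "init-2", "init-3", "init-4"]        # 4 initial-population generators
learners = ["ml-1", "ml-2", "ml-3", "ml-4", "ml-5"]            # 5 machine learning techniques

# Cartesian product yields 2 * 4 * 5 = 40 candidate execution settings.
settings = list(product(selection_algorithms, init_methods, learners))

# One simple way to spread the candidate settings over four processors:
# round-robin assignment, giving 10 settings per processor.
processors = {p: settings[p::4] for p in range(4)}
```

A setting selection scheme would then pick, per processor, one of its candidate settings to run.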
