Abstract

Feature selection (FS) is a technique that helps find an optimal feature subset for developing an efficient pattern recognition model. The use of genetic algorithm (GA) and particle swarm optimization (PSO) in the field of FS is profound. In this paper, we propose an insightful way to perform FS by amassing information from the candidate solutions produced by GA and PSO. Our aim is to combine the exploitation ability of GA with the exploration capacity of PSO. We name this new model binary genetic swarm optimization (BGSO). The proposed method initially lets GA and PSO run independently. To extract sufficient information from the feature subsets thus obtained, BGSO combines their results through an algorithm called the average weighted combination method to produce an intermediate solution. Thereafter, a local search called sequential one-point flipping is applied to further refine the intermediate solution and generate the final solution. BGSO is applied to 20 popular UCI datasets. The results are obtained using two classifiers, namely k nearest neighbors (KNN) and multi-layer perceptron (MLP). The overall results and comparisons show that the proposed method outperforms its constituent algorithms on 16 and 14 datasets using KNN and MLP, respectively, whereas among the constituent algorithms, GA achieves the best classification accuracy on 2 and 7 datasets and PSO on 2 and 4 datasets, respectively, for the same classifiers. This proves the applicability and usefulness of the method in the domain of FS.
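The two-stage pipeline described above can be sketched in code. Note that this is a minimal illustrative sketch, not the paper's actual implementation: the exact weighting scheme of the average weighted combination method and the fitness function are not specified in this abstract, so a fitness-weighted per-feature vote and a generic `evaluate` callback are assumed here for illustration.

```python
import numpy as np

def average_weighted_combination(ga_pop, pso_pop, ga_fit, pso_fit):
    """Hypothetical sketch of the combination step: each feature receives a
    fitness-weighted vote across all GA and PSO candidate subsets, and the
    intermediate solution keeps features whose vote reaches 0.5."""
    pop = np.vstack([ga_pop, pso_pop])        # binary masks, shape (n_candidates, n_features)
    fit = np.concatenate([ga_fit, pso_fit])   # fitness (e.g. accuracy) of each candidate
    weights = fit / fit.sum()                 # normalize fitness into weights
    scores = weights @ pop                    # per-feature weighted vote in [0, 1]
    return (scores >= 0.5).astype(int)        # intermediate binary solution

def sequential_one_point_flipping(mask, evaluate):
    """Local search: flip each bit once, in sequence, keeping a flip only
    if it improves the fitness returned by `evaluate`."""
    best = evaluate(mask)
    for i in range(len(mask)):
        mask[i] ^= 1                          # tentatively flip feature i
        new = evaluate(mask)
        if new > best:
            best = new                        # improvement: keep the flip
        else:
            mask[i] ^= 1                      # no improvement: revert
    return mask, best
```

In practice `evaluate` would train the chosen classifier (KNN or MLP) on the features selected by `mask` and return its validation accuracy.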

Highlights

  • Every object in real life has certain features, the unique entities which define its characteristics

  • We propose an insightful way to perform Feature selection (FS) by amassing information from the candidate solutions produced by genetic algorithm (GA) and particle swarm optimization (PSO)

  • The performance of the proposed model is tabulated against the performance of GA, PSO and histogram-based multi-objective GA (HMOGA)


Introduction

Every object in real life has certain features, the unique entities that define its characteristics. To identify patterns distinctively, researchers have relied on various feature extraction techniques. Many such features are chosen heuristically based on domain understanding and/or inherent properties of the object, such as statistical or morphological ones. The extracted features are not always capable of predicting the pattern classes with absolute accuracy. There are cases where features are uncorrelated with the pattern class to be predicted, i.e. the features are not useful enough to represent the pattern classes properly.
