Abstract

The electronic nose is a widely used artificial olfactory system for detecting and classifying volatile organic compounds. The high dimensionality of the data collected by electronic noses can hinder pattern recognition, so feature selection is an essential stage in building a robust and accurate gas-recognition model. This paper proposes an improved grey wolf optimizer (GWO) based algorithm for feature selection and applies it to electronic nose data for the first time. The proposed algorithm employs two mechanisms. The first consists of two novel binary transform approaches, which search for feature subsets of electronic nose data that maximize classification accuracy while minimizing the number of features. The second is an adaptive restart approach, which further enhances the search capability and stability of the algorithm. The proposed algorithm is compared with five efficient feature selection algorithms on three electronic nose data sets, and three classifiers together with multiple assessment indicators are used to evaluate its performance. The experimental results show that the proposed algorithm effectively selects feature subsets that are conducive to gas recognition, thereby improving the performance of the electronic nose.
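The two objectives named above (maximize accuracy, minimize subset size) are typically folded into a single fitness value in binary GWO-style feature selection. The sketch below shows that standard weighted formulation; the weight `alpha = 0.99` and the function itself are common conventions in this literature, not values taken from this paper.

```python
# Hedged sketch of a wrapper-style fitness for binary feature selection.
# alpha = 0.99 is a conventional weight, assumed here, not from the paper.

def fitness(selection, error_rate, alpha=0.99):
    """Lower is better: weighted sum of the classification error and the
    fraction of features kept.

    selection  : list of 0/1 flags, one per feature
    error_rate : classification error of a classifier (e.g. KNN)
                 trained on the selected features only
    """
    n_selected = sum(selection)
    n_total = len(selection)
    return alpha * error_rate + (1 - alpha) * (n_selected / n_total)

# Example: 4 of 10 features kept, 5% classification error
print(round(fitness([1, 0, 1, 1, 0, 0, 1, 0, 0, 0], 0.05), 4))  # → 0.0535
```

With this formulation, a wolf that keeps fewer features is preferred whenever two candidate subsets achieve the same error rate.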

Highlights

  • Feature selection is an important technique in the applications of pattern recognition

  • K-nearest neighbor (KNN) is a classifier with few parameters and high classification accuracy, so it was used as the wrapper method for all meta-heuristic algorithms in this study; the experimental results show that classification achieves the best performance when k is 5

  • This paper proposes a novel method for enhancing the performance of the electronic nose by feature selection using an improved grey wolf optimization based algorithm
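Because the grey wolf optimizer operates on continuous positions while feature selection needs a 0/1 mask, a binary variant must map each wolf's position to a feature subset. The paper's two novel binary transform approaches are not reproduced here; the sketch below shows the standard S-shaped (sigmoid) transfer function they improve upon, purely as background.

```python
# Hedged sketch: the conventional S-shaped transfer used in binary GWO to
# map a continuous position to a 0/1 feature-selection vector. This is the
# baseline technique, not the paper's proposed binary transforms.
import math
import random

def s_transfer(x):
    """Probability that the corresponding feature bit is set to 1."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng=random.random):
    """Continuous wolf position -> binary feature mask (stochastic)."""
    return [1 if rng() < s_transfer(x) else 0 for x in position]

random.seed(0)
print(binarize([-6.0, 0.0, 6.0]))  # bit i is 1 with probability s_transfer(x_i)
```

Strongly negative position components almost never select their feature, strongly positive ones almost always do, which is what lets the continuous GWO update rules steer the discrete search.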


Introduction

Feature selection is an important technique in pattern recognition applications. Data often contain many redundant features, which greatly affect classification accuracy and computational complexity. To eliminate the influence of redundant features on the classification process, feature selection plays an important role in reducing the dimension of the data, improving the accuracy of the model, and providing deeper insight into the data [1]. Feature selection methods can be roughly divided into three categories: filter, wrapper, and embedded [2]. The filter method sorts the features according to predefined criteria, and the feature selection process is independent of the classifier. The wrapper method evaluates candidate feature subsets by the performance of a classifier, while in the embedded method the selection of variables is integrated into the training process
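The wrapper idea described above can be sketched concretely: score a candidate feature subset by the accuracy a classifier achieves when restricted to those features. The tiny from-scratch KNN, the toy data, and k=3 below are illustrative assumptions, not the paper's experimental setup (which uses KNN with k=5 on real electronic nose data).

```python
# Hedged sketch of wrapper-style subset evaluation: leave-one-out accuracy
# of a small KNN classifier restricted to a candidate feature subset.
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k):
    """Majority vote among the k nearest training points (Euclidean)."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def wrapper_accuracy(X, y, subset, k=3):
    """Leave-one-out accuracy using only the features flagged in `subset`."""
    cols = [i for i, keep in enumerate(subset) if keep]
    Xs = [[row[i] for i in cols] for row in X]
    hits = 0
    for i in range(len(Xs)):
        train_X = Xs[:i] + Xs[i + 1:]
        train_y = y[:i] + y[i + 1:]
        hits += knn_predict(train_X, train_y, Xs[i], k) == y[i]
    return hits / len(Xs)

# Toy sensor-like data: feature 0 separates the classes, feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.15, 3.0],
     [0.9, 4.0], [1.0, 0.5], [0.95, 2.5]]
y = [0, 0, 0, 1, 1, 1]
print(wrapper_accuracy(X, y, [1, 0]))  # informative feature only → 1.0
```

A meta-heuristic such as the binary GWO would call this kind of evaluation once per candidate subset, which is why wrappers are accurate but computationally heavier than filters.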

