Abstract

The recent surge in high-dimensional data brings major complications, including high computational costs and memory requirements. Feature selection, which identifies the most relevant and informative attributes of a dataset, has been introduced as a solution to this problem. Most existing feature selection methods are computationally inefficient, and inefficient algorithms lead to high energy consumption, which is undesirable for devices with limited computational and energy resources. In this paper, a novel and flexible method for unsupervised feature selection is proposed. This method, named QuickSelection (the code is available at: https://github.com/zahraatashgahi/QuickSelection), introduces neuron strength in sparse neural networks as a criterion for measuring feature importance. This criterion, combined with sparsely connected denoising autoencoders trained with the sparse evolutionary training procedure, derives the importance of all input features simultaneously. We implement QuickSelection in a purely sparse manner, as opposed to the typical approach of using a binary mask over connections to simulate sparsity, which results in a considerable speed-up and memory reduction. When tested on several benchmark datasets, including five low-dimensional and three high-dimensional datasets, the proposed method achieves the best trade-off between classification and clustering accuracy, running time, and maximum memory usage among widely used feature selection approaches. Moreover, our proposed method requires the least energy among state-of-the-art autoencoder-based feature selection methods.
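To make the neuron-strength criterion concrete, the following is a minimal sketch, not the authors' implementation: each input feature is scored by the summed absolute weight of its outgoing connections in a sparse first layer, and the top-scoring features are selected. The matrix shape, sparsity level, and use of a SciPy CSR matrix are illustrative assumptions.

```python
# Minimal sketch of the neuron-strength idea: score each input feature by the
# summed absolute weight of its outgoing connections in a sparse first layer.
# The layer shape, density, and CSR storage are illustrative assumptions.
import numpy as np
from scipy.sparse import random as sparse_random

n_features, n_hidden, density = 1000, 200, 0.05
W = sparse_random(n_features, n_hidden, density=density, format="csr",
                  data_rvs=np.random.randn)          # sparse input-to-hidden weights

strength = np.asarray(abs(W).sum(axis=1)).ravel()    # one importance score per input feature
k = 50
selected = np.argsort(strength)[::-1][:k]            # indices of the top-k features
print(selected[:10])
```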

Highlights

  • In the last few years, considerable attention has been paid to the problem of dimensionality reduction, and many approaches have been proposed (Van der Maaten et al., 2009).

  • We introduce, for the first time, sparse training to denoising autoencoders, and we name the newly introduced model the sparse denoising autoencoder.

  • To compute clustering accuracy (Li et al., 2018), we first perform K-means on the subset of the dataset corresponding to the selected features and obtain the cluster labels (see the sketch after this list).
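As a rough illustration of this evaluation step, the sketch below clusters the selected features with K-means and matches cluster labels to class labels with the Hungarian algorithm. The function and variable names (e.g. `evaluate`, `X_selected`) are hypothetical and not taken from the paper's code.

```python
# Illustrative sketch of computing clustering accuracy on the selected features.
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one mapping between cluster ids and class labels."""
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(-cost)        # Hungarian matching, maximizing agreement
    return cost[rows, cols].sum() / len(y_true)

def evaluate(X_selected, y, n_clusters):
    # X_selected: samples restricted to the selected features; y: ground-truth labels
    pred = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X_selected)
    return clustering_accuracy(y, pred)
```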


Summary

Introduction

In the last few years, considerable attention has been paid to the problem of dimensionality reduction, and many approaches have been proposed (Van der Maaten et al., 2009). Feature extraction focuses on transforming the data into a lower-dimensional space. This transformation is done through a mapping that results in a new set of features (Liu & Motoda, 1998). Feature selection reduces the feature space by selecting a subset of the original attributes without generating new features (Chandrashekar & Sahin, 2014). Based on the availability of labels, feature selection methods are divided into three categories: supervised (Ang et al., 2015; Chandrashekar & Sahin, 2014), semi-supervised (Sheikhpour et al., 2017; Zhao & Liu, 2007), and unsupervised (Dy & Brodley, 2004; Miao & Niu, 2016). Unsupervised feature selection is considered a much harder problem (Dy & Brodley, 2004).
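To make the distinction concrete, here is a small illustrative contrast, under assumed data and placeholder indices, between feature extraction, which builds new features from the originals, and feature selection, which keeps a subset of the original columns unchanged.

```python
# Illustrative contrast between feature extraction and feature selection.
# PCA stands in for a generic feature-extraction mapping; the column subset
# stands in for feature selection (the indices are arbitrary placeholders).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                         # assumed data matrix

# Feature extraction: new features are combinations of the original ones.
X_extracted = PCA(n_components=5).fit_transform(X)     # shape (100, 5)

# Feature selection: keep a subset of the original columns unchanged.
selected_idx = [0, 3, 7, 12, 19]
X_selected = X[:, selected_idx]                        # shape (100, 5)
```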

