Abstract

Dimensionality reduction is commonly used to preprocess high-dimensional data and is an essential step in machine learning and data mining. A good low-dimensional feature representation can improve the efficiency of subsequent learning tasks. However, most existing dimensionality reduction methods assume datasets with sufficient labels and fail to produce effective feature vectors when labels are scarce. In this paper, an unsupervised multi-layer sparse autoencoder model is studied. Its advantage is that it takes reconstruction error as its optimization objective, so the resulting low-dimensional features can reconstruct the original data as faithfully as possible; this makes the mapping from high-dimensional to low-dimensional data effective. First, the relationship among the reconstruction error, the number of iterations, and the number of hidden units is explored. Second, the dimensionality reduction ability of the sparse autoencoder is demonstrated: several classical feature representation methods are compared with the sparse autoencoder on publicly available datasets, and the corresponding low-dimensional representations are fed into different supervised classifiers, whose classification performances are reported. Finally, by adjusting the parameters that might influence classification performance, the parametric sensitivity of the sparse autoencoder is examined. Extensive experimental results on low-dimensional feature classification demonstrate that the sparse autoencoder is more efficient and reliable than the other selected classical dimensionality reduction algorithms.
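The core idea described above can be illustrated with a minimal single-hidden-layer sparse autoencoder: the network is trained to minimize reconstruction error, with a KL-divergence penalty encouraging sparse hidden activations, and the hidden layer then serves as the low-dimensional feature. The following NumPy sketch is illustrative only; the paper's multi-layer architecture and its actual hyperparameters (sparsity target `rho`, penalty weight `beta`, learning rate) are not specified here and are chosen for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SparseAutoencoder:
    """Illustrative single-hidden-layer sparse autoencoder (hypothetical
    hyperparameters, not the paper's exact model)."""

    def __init__(self, n_in, n_hidden, rho=0.05, beta=0.01, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b2 = np.zeros(n_in)
        self.rho, self.beta, self.lr = rho, beta, lr

    def encode(self, X):
        # Hidden activations = the low-dimensional feature representation.
        return sigmoid(X @ self.W1 + self.b1)

    def decode(self, H):
        return sigmoid(H @ self.W2 + self.b2)

    def fit(self, X, epochs=300):
        for _ in range(epochs):
            H = self.encode(X)
            Xhat = self.decode(H)
            n = X.shape[0]
            # Gradient of squared reconstruction error through the sigmoid.
            d_out = (Xhat - X) * Xhat * (1.0 - Xhat)
            # Average hidden activation, compared to the sparsity target rho
            # via a KL-divergence penalty gradient.
            rho_hat = H.mean(axis=0)
            sparse_grad = self.beta * (-self.rho / rho_hat
                                       + (1.0 - self.rho) / (1.0 - rho_hat))
            d_hid = (d_out @ self.W2.T + sparse_grad) * H * (1.0 - H)
            # Plain batch gradient descent updates.
            self.W2 -= self.lr * (H.T @ d_out) / n
            self.b2 -= self.lr * d_out.mean(axis=0)
            self.W1 -= self.lr * (X.T @ d_hid) / n
            self.b1 -= self.lr * d_hid.mean(axis=0)
        return self

    def reconstruction_error(self, X):
        return float(np.mean((self.decode(self.encode(X)) - X) ** 2))
```

After training, `encode(X)` yields the low-dimensional features that would be passed to a downstream supervised classifier, mirroring the evaluation protocol sketched in the abstract.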
