Abstract

A stacked autoencoder (SAE) is a widely used deep network. However, existing deep SAEs focus on the original samples without considering the hierarchical structural information between samples, which limits their accuracy. In recent years, state-of-the-art SAEs have improved the network structure, cost function, and parameter optimization, and accuracy has been enhanced accordingly; the problem above, however, remains unsolved. This paper therefore addresses how to design an SAE that can perform deep learning on hierarchically structured samples. The proposed SAE, the neighboring envelope embedded stacked autoencoder (NE_ESAE), consists of two parts. The first is the neighboring sample envelope learning mechanism (NSELM), which constructs sample pairs by combining neighboring samples and builds multilayer sample spaces via multilayer iterative mean clustering; by considering similar samples, it generates layers of envelope samples that carry hierarchical structural information. The second is an embedded stacked autoencoder (ESAE), which takes the original samples into account during training and in the network structure, thereby better relating the samples' original features to their deep features. Experimental results show that the proposed method performs significantly better than several representative methods. Unlike existing SAEs, NE_ESAE realizes deep learning on hierarchically structured samples and enables an SAE to perform cooperative deep sample and feature learning; this advantage can also be applied to other deep neural networks.
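The abstract does not give the concrete construction of NSELM, but the idea of combining neighboring samples and iterating a mean-based step over several layers can be sketched as follows. This is a minimal, hypothetical reading: the neighbor count, the simple averaging rule, and the function names `envelope_layer` and `multilayer_envelopes` are all illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def envelope_layer(X, n_neighbors=2):
    """One hypothetical envelope step: average each sample with its
    nearest neighbors, so similar samples are blended together."""
    # Pairwise Euclidean distances between samples.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude self-matches
    # Indices of the n_neighbors closest samples for each row.
    idx = np.argsort(dists, axis=1)[:, :n_neighbors]
    # Mean of each sample and its neighbors -> one envelope sample.
    return (X + X[idx].sum(axis=1)) / (n_neighbors + 1)

def multilayer_envelopes(X, n_layers=3):
    """Iterate the envelope step to build layers of progressively
    smoother samples, mimicking a multilayer sample space."""
    layers = [X]
    for _ in range(n_layers):
        layers.append(envelope_layer(layers[-1]))
    return layers
```

Each layer produced this way could then be fed to the corresponding autoencoder layer during pretraining; how the paper actually pairs layers with the ESAE is specified only in the full text.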
