Abstract

To exploit the potentially low-dimensional intrinsic structure of high-dimensional data from the manifold learning perspective, we propose a global graph embedding with a globality-preserving property, which requires that samples be mapped close to their low-dimensional class representation distribution centers in the embedding space. We then propose a novel local and global graph embedding auto-encoder (LGAE) to capture the geometric structure of data. Its cost function has three terms: a reconstruction loss that reproduces the input data from the learned representation, a local graph embedding regularization that maps neighboring samples close together in the embedding space, and a global embedding regularization that maps samples close to their low-dimensional class representation distribution centers. During learning, LGAE therefore maps samples from the same class close together in the embedding space, reduces the within-class scatter and enlarges the between-class margin, and captures the local and global intrinsic geometric structure of the data while discovering latent discriminant information in the embedding space. We build a stacked LGAE for classification tasks and conduct comprehensive experiments on several benchmark datasets. The results confirm that the proposed framework learns discriminative representations, speeds up network convergence, and significantly improves classification performance.
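The abstract does not state the exact objective, but the three-term cost function it describes can plausibly be written in the following form, where f and g denote the encoder and decoder, h_i = f(x_i) is the hidden code of sample x_i with reconstruction \hat{x}_i = g(h_i), W_ij is an assumed nearest-neighbor affinity for the local graph, c_{y_i} is the assumed class representation center of sample i, and λ1, λ2 are illustrative trade-off weights:

\[
J \;=\; \underbrace{\frac{1}{n}\sum_{i=1}^{n}\lVert x_i - \hat{x}_i\rVert_2^2}_{\text{reconstruction}}
\;+\; \lambda_1 \underbrace{\sum_{i,j} W_{ij}\,\lVert h_i - h_j\rVert_2^2}_{\text{local graph embedding}}
\;+\; \lambda_2 \underbrace{\sum_{i=1}^{n}\lVert h_i - c_{y_i}\rVert_2^2}_{\text{global embedding}}
\]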

Highlights

  • The past decades have witnessed great development of deep learning algorithms; the main idea of deep learning methods is to automatically learn high-level abstractions of data using deep architectures composed of multiple nonlinear transformations.

  • We propose a global graph embedding that projects samples close to their low-dimensional class representation distribution centers in the embedding space (a sketch of this regularizer is given after this list).

  • Comparison methods: to evaluate the effectiveness of our SLGAE, we compare it with several state-of-the-art methods: 1) the stacked auto-encoder (SAE) [1]; 2) the stacked sparse auto-encoder (SSpAE) [2] with KL-divergence regularization, where the sparsity parameter ρ is set to 0.05 and the sparsity penalty coefficient β to 0.1; and 3) the stacked denoising auto-encoder (SDAE) [3].
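As referenced in the second highlight, the following is a minimal NumPy sketch of the globality-preserving regularizer, assuming the class representation centers are the per-class means of the current hidden codes; the function name, the weight lambda_g, and this choice of centers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def global_embedding_penalty(H, y, lambda_g=0.1):
    """Hypothetical globality-preserving regularizer.

    Pulls each hidden code h_i toward the center of its class in the
    embedding space. H is an (n, d) array of hidden codes, y an (n,)
    array of integer class labels. The per-class means serve as the
    'class representation distribution centers'; this is an assumption,
    since the exact choice of centers is not given in this summary.
    """
    penalty = 0.0
    for c in np.unique(y):
        Hc = H[y == c]                 # codes belonging to class c
        center = Hc.mean(axis=0)       # assumed class center
        penalty += np.sum((Hc - center) ** 2)
    return lambda_g * penalty / len(H)

# Usage (illustrative): total loss = reconstruction + local graph term
#                                    + global_embedding_penalty(H, y)
```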



Introduction

The past decades have witnessed great development of deep learning algorithms. The main idea of deep learning methods is to automatically learn high-level abstractions of data by using deep architectures composed of multiple nonlinear transformations. As one of the most representative deep learning approaches, the auto-encoder [1]–[4] is used to learn representations by minimizing the reconstruction error between the input and the reconstructed output. It consists of an encoder that projects the input to a representation layer and a decoder that maps the representation back to reconstruct the input. A plain auto-encoder learns a simple identity-like mapping in order to extract some underlying explanatory factors of the data. The denoising auto-encoder (DAE) [3] instead trains a network to reconstruct the clean input from a corrupted version of it, which avoids trivial identity mapping and discovers representations that are more robust under different types of noise. The contractive auto-encoder (CAE) [4] adds a Frobenius-norm penalty on the Jacobian of the hidden representation with respect to the input, encouraging the learned representation to be insensitive to small perturbations of the input.
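The encoder–decoder structure and reconstruction objective described above can be sketched as follows. This is a minimal PyTorch illustration; the layer sizes, sigmoid activations, Gaussian corruption for the denoising variant, and training hyperparameters are assumptions for illustration rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal auto-encoder: encoder -> representation layer -> decoder."""
    def __init__(self, in_dim=784, hid_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(hid_dim, in_dim), nn.Sigmoid())

    def forward(self, x, noise_std=0.0):
        # For a denoising auto-encoder, corrupt the input but still
        # reconstruct the clean x (Gaussian corruption is one common choice).
        x_in = x + noise_std * torch.randn_like(x) if noise_std > 0 else x
        h = self.encoder(x_in)          # representation layer
        x_hat = self.decoder(h)         # reconstruction
        return h, x_hat

# Illustrative training step: minimize the reconstruction error.
model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)                 # dummy batch
h, x_hat = model(x, noise_std=0.1)
loss = nn.functional.mse_loss(x_hat, x) # reconstruct the clean input
opt.zero_grad()
loss.backward()
opt.step()
```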

