Abstract

With the recent success of deep neural networks, stacked autoencoder networks have received considerable attention for robust unsupervised representation learning. However, existing autoencoder methods cannot make full use of multi-view information and thus fail to exploit the geometric structure of multi-view data, limiting their benefit to many real-world applications. To address this issue, we introduce hierarchical graph augmented stacked autoencoders (HGSAE) for unsupervised multi-view representation learning. Specifically, a hierarchical graph structure is first incorporated into stacked autoencoders to learn view-specific representations, preserving the geometric information of multi-view data through local and non-local graph regularizations. A common representation is then learned by reconstructing each individual view with fully connected neural networks. In this way, the proposed method not only preserves the geometric information of multi-view data but also automatically balances the complementarity and consistency among different views. Extensive experiments on six popular unsupervised representation learning datasets demonstrate the effectiveness of our method compared with recent state-of-the-art autoencoder methods.
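To make the graph-regularized objective concrete, below is a minimal PyTorch sketch of a single view-specific autoencoder trained with a reconstruction loss plus local and non-local Laplacian regularizers of the form tr(HᵀLH). The class name GraphRegularizedAE, the layer sizes, the two-hop construction of the non-local affinity, and the weights alpha/beta are illustrative assumptions for this sketch, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class GraphRegularizedAE(nn.Module):
    """One view-specific autoencoder (illustrative single layer; the
    paper stacks several such layers per view)."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.decoder = nn.Linear(hid_dim, in_dim)

    def forward(self, x):
        h = self.encoder(x)          # latent representation H
        return h, self.decoder(h)    # (H, reconstruction)

def laplacian(W):
    # Unnormalized graph Laplacian L = D - W from an affinity matrix W.
    return torch.diag(W.sum(dim=1)) - W

def loss_fn(x, x_hat, h, L_local, L_nonlocal, alpha=0.1, beta=0.1):
    # Reconstruction error plus tr(H^T L H) terms, which pull the
    # embeddings of graph-connected samples toward each other.
    recon = ((x - x_hat) ** 2).mean()
    reg_local = torch.trace(h.T @ L_local @ h) / h.shape[0]
    reg_nonlocal = torch.trace(h.T @ L_nonlocal @ h) / h.shape[0]
    return recon + alpha * reg_local + beta * reg_nonlocal

# Toy usage: 100 samples of one 20-dimensional view with a random,
# symmetrized affinity standing in for a k-NN graph.
x = torch.randn(100, 20)
W = (torch.rand(100, 100) > 0.9).float()
W = ((W + W.T) > 0).float()
W.fill_diagonal_(0)

# Illustrative non-local graph: two-hop neighbours that are not
# direct neighbours (one plausible reading of "non-local").
W_nl = ((W @ W) > 0).float() * (1 - W)
W_nl.fill_diagonal_(0)

model = GraphRegularizedAE(20, 10)
h, x_hat = model(x)
loss = loss_fn(x, x_hat, h, laplacian(W), laplacian(W_nl))
loss.backward()
print(f"toy loss: {loss.item():.4f}")
```

In the full method, one such graph-regularized stack is trained per view, and the resulting view-specific representations are fed to fully connected networks that reconstruct every individual view, yielding the common representation.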
