Abstract

A key factor contributing to the success of many auto-encoder-based deep learning techniques is the implicit consideration of the underlying data manifold in their training criteria. In this paper, we aim to make this consideration more explicit by training auto-encoders entirely from the manifold-learning perspective. We propose a novel unsupervised manifold learning method termed Laplacian Auto-Encoders (LAEs). Starting from a general regularized function learning framework, LAE regularizes the training of auto-encoders so that the learned encoding function has the locality-preserving property for data points on the manifold. By exploiting the analogy between the graph Laplacian and the Laplace–Beltrami operator on the continuous manifold, we derive discrete approximations of the first- and higher-order auto-encoder regularizers that can be applied in practical scenarios, where only data points sampled from the distribution on the manifold are available. Our proposed LAE has potentially better generalization capability because it explicitly respects the underlying data manifold. Extensive experiments on benchmark visual classification datasets show that LAE consistently outperforms alternative auto-encoders recently proposed in the deep learning literature, especially when training samples are relatively scarce.
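To make the first-order regularizer concrete, below is a minimal sketch (not the authors' implementation) of an auto-encoder loss augmented with a graph-Laplacian locality penalty tr(H^T L H), where H stacks the hidden codes and L is built from a k-nearest-neighbour heat-kernel affinity over the sample. The network sizes, the affinity construction, and the weight `lam` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

def knn_laplacian(x, k=5, sigma=1.0):
    """Unnormalized graph Laplacian L = D - W from a k-NN heat-kernel
    affinity over the sampled points (an illustrative construction)."""
    d2 = torch.cdist(x, x).pow(2)               # pairwise squared distances
    w = torch.exp(-d2 / (2 * sigma ** 2))       # heat-kernel affinities
    nbrs = d2.topk(k + 1, largest=False).indices  # k nearest (plus self)
    mask = torch.zeros_like(w)
    mask.scatter_(1, nbrs, 1.0)
    w = torch.max(w * mask, (w * mask).t())     # sparsify and symmetrize
    w.fill_diagonal_(0)
    return torch.diag(w.sum(1)) - w             # L = D - W

class AutoEncoder(nn.Module):
    def __init__(self, d_in=20, d_hid=8):       # toy dimensions
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.Sigmoid())
        self.dec = nn.Linear(d_hid, d_in)
    def forward(self, x):
        h = self.enc(x)
        return h, self.dec(h)

x = torch.randn(128, 20)                        # toy data sampled from the manifold
L = knn_laplacian(x)                            # fixed graph on the sample
model, lam = AutoEncoder(), 0.1                 # lam: assumed regularization weight
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    h, x_hat = model(x)
    recon = (x_hat - x).pow(2).sum(1).mean()    # reconstruction loss
    # first-order locality penalty: tr(H^T L H) = (1/2) sum_ij W_ij ||h_i - h_j||^2
    reg = torch.trace(h.t() @ L @ h) / x.size(0)
    loss = recon + lam * reg
    opt.zero_grad(); loss.backward(); opt.step()
```

The penalty is small when nearby inputs (under the graph affinity) receive nearby codes, which is one way to realize the locality-preserving property the abstract describes.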
