Abstract

This work proposes a new formulation for the supervised stacked autoencoder. We argue that features from the same class should be similar to each other and hence linearly dependent. Consequently, when the features of a class are stacked as columns, the resulting feature matrix is rank deficient (low-rank). We incorporate this constraint into the stacked autoencoder formulation as nuclear norm penalties on the class-wise feature matrices at each level. The nuclear norm is the convex surrogate of rank and therefore promotes the low-rank solutions our argument calls for. Owing to the nuclear norm penalties, our formulation is non-smooth and hence cannot be solved directly with gradient-descent-based techniques such as backpropagation. Moreover, we learn the stacked autoencoder in one go, without the usual pre-training followed by fine-tuning regime. Both requirements (handling a non-smooth cost function and training all layers simultaneously in a single stage) are met by variable splitting followed by the augmented Lagrangian method of alternating directions. Two sets of experiments have been carried out. The first is on a variety of benchmark datasets, where our method outperforms the deep learning models it is compared against: the class-sparse stacked autoencoder, the deep belief network, and the discriminative deep belief network. The second is on the brain-computer interface classification problem, where our method outperforms prior deep learning based solutions for this task.
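To make the class-wise low-rank idea concrete, the NumPy sketch below evaluates a nuclear norm penalty over per-class feature matrices for a single encoding layer. This is only an illustration under assumptions, not the paper's actual formulation: the function names (`nuclear_norm`, `classwise_lowrank_penalty`), the tied-weight decoder, and the penalty weight `lam` are hypothetical choices made for the example.

```python
import numpy as np

def nuclear_norm(M):
    # Nuclear norm = sum of singular values; the convex surrogate of rank.
    return np.linalg.svd(M, compute_uv=False).sum()

def classwise_lowrank_penalty(features, labels):
    # features: d x n matrix whose columns are encoded samples.
    # labels:   length-n array of class labels.
    # Sum the nuclear norms of the per-class feature matrices, so that
    # minimizing the penalty pushes each class's features toward a
    # low-rank (linearly dependent) configuration.
    penalty = 0.0
    for c in np.unique(labels):
        Z_c = features[:, labels == c]   # columns belonging to class c
        penalty += nuclear_norm(Z_c)
    return penalty

# Hypothetical usage: one autoencoder layer's cost with the penalty added.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))       # 100 samples with 20 features each
y = rng.integers(0, 3, size=100)         # 3 classes
W = rng.standard_normal((10, 20))        # encoder weights (illustrative)
Z = np.tanh(W @ X)                       # encoded features
lam = 0.1                                # penalty weight (assumed)
cost = np.linalg.norm(X - W.T @ Z) ** 2 + lam * classwise_lowrank_penalty(Z, y)
print(cost)
```

Note that such a cost cannot be minimized by plain backpropagation, because the nuclear norm terms are non-smooth; as stated in the abstract, the authors instead use variable splitting with an augmented Lagrangian alternating directions scheme.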
