Abstract

In the field of feature learning, many methods are inspired by advances in neuroscience. Among them, neural networks and sparse coding have been studied extensively. Predictive sparse decomposition (PSD) is a practical variant that combines the two: it trains a neural network to estimate the sparse codes. After training, the neural network is fine-tuned to achieve higher performance on object recognition tasks. It is widely believed that introducing discriminative information makes the learned features more useful for classification. Hence, in this work, we propose applying the task-driven dictionary learning framework to PSD and show that the resulting model can be optimized with the stochastic gradient descent (SGD) algorithm. Prior to our work, the semi-supervised auto-encoder framework was proposed to guide a neural network toward discriminative representations, but it does not improve the network's classification performance. In the experiments, we compare the proposed method with the semi-supervised auto-encoder method, using the performance of PSD as the baseline for both. On the MNIST and USPS datasets, our method generates sparse codes that are more discriminative and more predictable than those of the other methods. Furthermore, it improves the recognition accuracy of the neural network.
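To make the PSD training scheme described above concrete, the following is a minimal sketch assuming a linear dictionary `D`, a one-layer tanh encoder, and ISTA-style inference of the sparse codes. All names, dimensions, and hyper-parameter values (`lam`, `alpha`, step sizes) are illustrative assumptions, not the paper's settings.

```python
# Minimal PSD-style training sketch (PyTorch). Assumed setup: MNIST-sized
# inputs (784), a 256-dimensional code, and illustrative hyper-parameters.
import torch

input_dim, code_dim = 784, 256
lam, alpha = 0.5, 1.0                     # sparsity / prediction weights (assumed)

D = torch.nn.Parameter(0.01 * torch.randn(input_dim, code_dim))   # dictionary
encoder = torch.nn.Sequential(torch.nn.Linear(input_dim, code_dim), torch.nn.Tanh())
opt = torch.optim.SGD([D, *encoder.parameters()], lr=1e-2)

@torch.no_grad()
def sparse_code(x, n_steps=50, step=0.1):
    """ISTA-style approximation of argmin_z ||x - D z||^2 + lam * ||z||_1."""
    z = encoder(x)                        # warm start from the encoder prediction
    for _ in range(n_steps):
        grad = (z @ D.t() - x) @ D        # gradient of the reconstruction term
        z = z - step * grad
        z = torch.sign(z) * torch.clamp(z.abs() - step * lam, min=0.0)  # soft threshold
    return z

def psd_step(x):
    """One SGD step on the PSD loss for a mini-batch x of shape (batch, input_dim)."""
    z = sparse_code(x)                    # codes are held fixed during this step
    recon = z @ D.t()
    loss = ((x - recon) ** 2).sum(1).mean() \
           + lam * z.abs().sum(1).mean() \
           + alpha * ((z - encoder(x)) ** 2).sum(1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

This sketch covers only the unsupervised PSD baseline; in the task-driven setting proposed in the abstract, a supervised classification term on the codes would additionally enter the objective, and the encoder would later be fine-tuned for recognition.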
