Abstract
In the field of feature learning, many methods are inspired by advances in neuroscience. Among them, neural networks and sparse coding have been studied extensively. Predictive sparse decomposition (PSD) is a practical variant that combines the two: it trains a neural network to estimate sparse codes. After training, the neural network is fine-tuned to achieve higher performance on object recognition tasks. It is widely believed that introducing discriminative information makes the learned features more useful for classification. Hence, in this work, we propose applying the task-driven dictionary learning framework to PSD and demonstrate that the resulting model can be optimized with the stochastic gradient descent (SGD) algorithm. Prior to our work, the semi-supervised auto-encoder framework was proposed to guide a neural network toward discriminative representations, but it does not improve the network's classification performance. In our experiments, we compare the proposed method with the semi-supervised auto-encoder, using PSD as the baseline for both. On the MNIST and USPS datasets, our method generates sparse codes that are more discriminative and more predictable than those of the other methods, and it improves the recognition accuracy of the neural network.
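To make the training procedure concrete, the following is a minimal sketch of a supervised PSD-style objective optimized with SGD. It is illustrative only and assumes details not stated in the abstract: the encoder is a single linear layer followed by soft shrinkage, the sparse code is taken directly as the encoder output (so PSD's separate code-prediction penalty collapses), the discriminative term is a linear softmax classifier on the code, and the hyperparameters and mini-batch are placeholders.

```python
# Minimal sketch (not the paper's implementation) of a supervised PSD-style
# objective trained with SGD. Assumed details: a one-layer soft-shrinkage
# encoder stands in for the code predictor, the code z is the encoder output,
# and a linear softmax classifier on z supplies the discriminative term.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_in, n_atoms, n_classes = 64, 32, 10          # toy sizes, not from the paper
lam, beta, lr = 0.1, 1.0, 1e-2                 # illustrative hyperparameters

D = torch.nn.Parameter(0.1 * torch.randn(d_in, n_atoms))        # dictionary
W = torch.nn.Parameter(0.1 * torch.randn(n_atoms, d_in))        # encoder weights
b = torch.nn.Parameter(torch.zeros(n_atoms))                    # encoder bias
C = torch.nn.Parameter(0.01 * torch.randn(n_classes, n_atoms))  # classifier
opt = torch.optim.SGD([D, W, b, C], lr=lr)

def encode(x):
    # PSD-style predictor: soft shrinkage of a linear map yields sparse codes.
    return F.softshrink(x @ W.t() + b, lambd=lam)

# One SGD step on a random mini-batch (a stand-in for MNIST/USPS digits).
x = torch.randn(16, d_in)
y = torch.randint(0, n_classes, (16,))

z = encode(x)
recon = ((x - z @ D.t()) ** 2).sum(dim=1).mean()   # reconstruction error
sparsity = lam * z.abs().sum(dim=1).mean()         # L1 sparsity penalty
task = beta * F.cross_entropy(z @ C.t(), y)        # discriminative (task-driven) term
loss = recon + sparsity + task

opt.zero_grad()
loss.backward()
opt.step()
with torch.no_grad():                              # keep dictionary atoms bounded
    D /= D.norm(dim=0, keepdim=True).clamp_min(1.0)
```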