Abstract

Representation-based methods have achieved impressive results in recent face recognition applications. However, noise and outliers in the data still make the face recognition task challenging. Many existing methods address these problems by constructing an auxiliary dictionary from extended data, but they fail to achieve good performance because they use only the main dictionary for classification. In this paper, to avoid both the manual construction of an auxiliary dictionary and the effects of noise, we propose a Joint Latent Low-Rank and Non-Negative Induced Sparse Representation (JLSRC) for face recognition. Specifically, JLSRC adaptively learns two clean low-rank reconstructed dictionaries jointly via an extended latent low-rank representation to reveal the potential relationships in the data, and then imposes a non-negative constraint and an Elastic Net regularization on the coefficient vectors of the dictionaries to enhance classification performance. In this way, the learned low-rank dictionaries mutually boost each other to extract discriminative features and handle noise, and the obtained coefficient vectors are simultaneously sparse and discriminative. Moreover, the proposed method seamlessly and elegantly integrates low-rank learning and sparse representation-based classification. Extensive experiments on three challenging face databases demonstrate the effectiveness and robustness of JLSRC in comparison with state-of-the-art methods.
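
To make the coding-and-classification stage described above concrete, the following is a minimal sketch, not the paper's implementation: it assumes the learned clean low-rank dictionaries have already been concatenated into a single matrix `D` with class labels per atom, and it uses scikit-learn's `ElasticNet` with `positive=True` as a stand-in for the non-negative Elastic Net coding, followed by an SRC-style class-wise residual decision. Function names and regularization values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import ElasticNet


def nn_elastic_net_code(D, y, alpha=0.1, l1_ratio=0.5):
    """Code test sample y over dictionary D with non-negative Elastic Net.

    D : (d, n) matrix whose columns are (assumed) clean low-rank atoms, grouped by class.
    y : (d,) test sample.
    Returns a sparse, non-negative coefficient vector of length n.
    """
    enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                      positive=True, fit_intercept=False, max_iter=5000)
    enet.fit(D, y)          # solves y ~ D @ coef with L1 + L2 penalty, coef >= 0
    return enet.coef_


def classify_by_residual(D, atom_labels, y, **coding_kwargs):
    """SRC-style decision: assign y to the class whose atoms reconstruct it best."""
    x = nn_elastic_net_code(D, y, **coding_kwargs)
    classes = np.unique(atom_labels)
    residuals = [np.linalg.norm(y - D[:, atom_labels == c] @ x[atom_labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

In this sketch the dictionary learning step (the joint latent low-rank representation) is assumed to have been done beforehand; only the non-negative, Elastic-Net-regularized coding and the residual-based classification are illustrated.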
