Low-quality palmprint images degrade recognition performance when they are captured under open, unconstrained, and low-illumination conditions. Moreover, traditional single-view palmprint representation methods struggle to express the distinctive characteristics of each palm when those characteristics are weak. To tackle these issues, in this article we propose a structure suture learning-based robust multiview palmprint recognition method (SSL_RMPR), which comprehensively presents salient palmprint features from multiple views. Unlike existing multiview palmprint representation methods, SSL_RMPR introduces a structure suture learning strategy to produce an elastic nearest neighbor graph (ENNG) on the reconstruction errors that simultaneously exploits the label information and the latent consensus structure of the multiview data, such that the discriminant palmprint representation is adaptively enhanced. Meanwhile, a low-rank reconstruction term integrated with projection matrix learning is proposed, so that the robustness of the projection matrix is improved. In particular, since no extra structure-capturing term is imposed on the proposed model, its complexity is greatly reduced. Experimental results demonstrate the superiority of the proposed SSL_RMPR, which achieves the best recognition performance on a number of real-world palmprint databases.
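As a rough illustration only (not the paper's actual formulation; the symbols below are assumptions introduced for exposition), a low-rank reconstruction term coupled with projection matrix learning over multiple views is typically sketched as an objective of the form

\[
\min_{\{P_v\},\, Z} \;\; \sum_{v=1}^{V} \left\| P_v^{\top} X_v - P_v^{\top} X_v Z \right\|_F^2 + \lambda \, \| Z \|_{*}
\quad \text{s.t.} \;\; P_v^{\top} P_v = I,
\]

where $X_v$ denotes the feature matrix of the $v$-th view, $P_v$ the corresponding projection matrix, $Z$ a reconstruction coefficient matrix shared across views, and $\|\cdot\|_*$ the nuclear norm encouraging $Z$ to be low rank; $\lambda$ balances the reconstruction error against the low-rank regularization.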