Abstract

Face recognition with a single sample per person (SSPP) is a very challenging task because in such a scenario it is difficult to predict the facial variations of a query sample from the gallery samples. Considering that different parts of human faces have different importance to face recognition, and that intra-class facial variations can be shared across different subjects, we propose a local generic representation (LGR) based framework for face recognition with SSPP. A local gallery dictionary is built by extracting the neighboring patches from the gallery dataset, while an intra-class variation dictionary is built from an external generic dataset to predict the possible facial variations (e.g., illumination, pose, expression, and disguise). LGR minimizes the total representation residual of the query sample over the local gallery dictionary and the generic variation dictionary, and it uses correntropy to measure the representation residual of each patch. Half-quadratic analysis is adopted to solve the optimization problem. LGR combines the advantages of patch-based local representation and generic variation representation, showing leading performance in face recognition with SSPP.
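As an illustration of the representation step described above, the following is a minimal sketch (not the authors' code) of a half-quadratic, iteratively reweighted least-squares solver for a patch-wise model y_s ≈ [D_s, G_s] x_s, where D_s is the local gallery dictionary and G_s the generic variation dictionary of patch s, and each patch residual receives a Gaussian (correntropy-style) weight. The dictionary construction, the regularizer, and the parameters `lam`, `sigma`, and `n_iter` are assumptions chosen for illustration only.

```python
# Illustrative sketch of a half-quadratic solver for a patch-wise
# representation with correntropy-style residual weights.
import numpy as np

def lgr_half_quadratic(patches, gallery_dicts, generic_dicts,
                       lam=0.01, sigma=1.0, n_iter=10):
    """patches[s]: d-dim query patch; gallery_dicts[s], generic_dicts[s]: d x k dictionaries."""
    S = len(patches)
    weights = np.ones(S)            # per-patch half-quadratic weights
    coeffs = [None] * S
    for _ in range(n_iter):
        residual_norms = np.empty(S)
        for s in range(S):
            y = patches[s]
            D = np.hstack([gallery_dicts[s], generic_dicts[s]])   # [D_s, G_s]
            # Weighted, ridge-regularized least squares for this patch
            A = weights[s] * D.T @ D + lam * np.eye(D.shape[1])
            x = np.linalg.solve(A, weights[s] * D.T @ y)
            coeffs[s] = x
            residual_norms[s] = np.linalg.norm(y - D @ x)
        # Half-quadratic update: Gaussian weights downweight badly
        # represented patches (e.g., occluded or disguised regions).
        weights = np.exp(-residual_norms**2 / (2 * sigma**2))
        weights /= weights.max() + 1e-12    # keep weights in a stable range
    return coeffs, weights
```

In such a scheme, classification would then compare class-wise reconstruction residuals accumulated over the weighted patches, so that unreliable patches contribute less to the decision.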
