This article focuses on a new and practical problem in single-sample per person face recognition (SSPP FR), i.e., SSPP FR with a contaminated biometric enrolment database (SSPP-ce FR), where the SSPP-based enrolment database is contaminated by nuisance facial variations in the wild, such as poor lighting, expression change, and disguises (e.g., sunglasses, hats, and scarves). In SSPP-ce FR, the most popular generic learning methods suffer serious performance degradation because the prototype plus variation (P+V) model used in these methods is no longer suitable in such scenarios. The reasons are twofold. First, the contaminated enrolment samples may yield poor prototypes for representing the enrolled persons. Second, the generated variation dictionary, built simply by subtracting the average face from generic samples of the same person, cannot adequately depict intrapersonal variations. To address the SSPP-ce FR problem, we propose a novel iterative dynamic generic learning (IDGL) method, in which the labeled enrolment database and the unlabeled query set are fed into a dynamic label feedback network for learning. Specifically, IDGL first recovers the prototypes for the contaminated enrolment samples via a semi-supervised low-rank representation (SSLRR) framework and learns a representative variation dictionary by extracting the "sample-specific" corruptions from an auxiliary generic set. It then feeds them into the P+V model to estimate labels for query samples. Subsequently, the estimated labels are used as feedback to refine the SSLRR, yielding updated prototypes for the next round of P+V-based label estimation. With this dynamic learning network, the accuracy of the estimated labels improves iteratively by virtue of the steadily enhanced prototypes. Experiments on various benchmark face data sets have demonstrated the superiority of IDGL over state-of-the-art counterparts.
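To make the P+V classification step and the label feedback loop concrete, the sketch below illustrates the general idea under simplified assumptions: each query is coded over the stacked dictionary [P V] with a plain ridge penalty (standing in for the sparsity/low-rank terms of the paper), and prototypes are refreshed with a mean-based blend of feedback samples rather than the paper's SSLRR recovery. The function names (pv_classify, idgl_loop), the blending weight, and the regularization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pv_classify(Y, P, V, proto_labels, lam=1e-2):
    """Label queries with the prototype-plus-variation model y ~ P*alpha + V*beta.

    Y : (d, n) query samples (columns).  P : (d, c) one prototype per person.
    V : (d, k) variation dictionary.     proto_labels : (c,) identity of each prototype.
    A ridge penalty `lam` stands in for the paper's sparsity/low-rank regularizers.
    """
    proto_labels = np.asarray(proto_labels)
    D = np.hstack([P, V])                            # combined dictionary [P V]
    G = D.T @ D + lam * np.eye(D.shape[1])
    X = np.linalg.solve(G, D.T @ Y)                  # joint codes [alpha; beta]
    alpha, beta = X[:P.shape[1]], X[P.shape[1]:]
    var_part = V @ beta                              # reconstructed variation component
    preds = np.empty(Y.shape[1], dtype=proto_labels.dtype)
    for j in range(Y.shape[1]):
        # class-wise residual: keep one prototype's contribution at a time
        resid = [np.linalg.norm(Y[:, j] - P[:, i] * alpha[i, j] - var_part[:, j])
                 for i in range(P.shape[1])]
        preds[j] = proto_labels[int(np.argmin(resid))]
    return preds

def idgl_loop(Y, enrol, V, proto_labels, n_iter=5):
    """Crude stand-in for the iterative label-feedback loop: classify queries,
    then refresh each prototype by blending the enrolment sample with the mean
    of the queries currently assigned to that person (the paper instead recovers
    prototypes with semi-supervised low-rank representation, SSLRR)."""
    proto_labels = np.asarray(proto_labels)
    P = enrol.astype(float).copy()
    preds = pv_classify(Y, P, V, proto_labels)
    for _ in range(n_iter):
        for i, lbl in enumerate(proto_labels):
            assigned = Y[:, preds == lbl]
            if assigned.size:                        # feed estimated labels back
                P[:, i] = 0.5 * enrol[:, i] + 0.5 * assigned.mean(axis=1)
        new_preds = pv_classify(Y, P, V, proto_labels)
        if np.array_equal(new_preds, preds):         # labels stabilised; stop early
            break
        preds = new_preds
    return preds, P
```

In this simplified form, the outer loop mirrors the abstract's description: label estimation with the P+V model, feedback of those labels to improve the prototypes, and re-estimation until the labels stabilise.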