Abstract

Handling intra-personal variation is a major challenge in face recognition. It is difficult to appropriately measure the similarity between human faces under significantly different settings (e.g., pose, illumination, and expression). In this paper, we propose a new model, called the “Associate-Predict” (AP) model, to address this issue. The associate-predict model is built on an extra generic identity data set, in which each identity contains multiple images with large intra-personal variation. Given two faces under significantly different settings (e.g., non-frontal and frontal), we first “associate” one input face with alike identities from the generic identity data set. Using the associated faces, we generatively “predict” the appearance of one input face under the setting of the other input face, or discriminatively “predict” the likelihood that the two input faces are from the same person. We call the two proposed prediction methods “appearance-prediction” and “likelihood-prediction”. By leveraging an extra data set (“memory”) and the “associate-predict” model, intra-personal variation can be effectively handled. To improve the generalization ability of our model, we further add a switching mechanism: we directly compare the appearances of two faces if they have close intra-personal settings; otherwise, we use the associate-predict model for the recognition. Experiments on two public face benchmarks (Multi-PIE and LFW) demonstrate that our final model can substantially improve the performance of most existing face recognition methods.
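To make the pipeline concrete, below is a minimal NumPy sketch of the associate-predict idea together with the switching mechanism. It is an illustration under stated assumptions, not the authors' implementation: faces are assumed to be feature vectors, intra-personal settings are assumed to be discrete bins (e.g., pose categories), and the generic identity set is assumed to be a `memory` mapping each identity to one feature per setting. All function and variable names here are hypothetical.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def associate(feat, setting, memory):
    # "Associate" step: pick the generic identity whose image under the
    # same setting looks most like the input face.
    return max(memory, key=lambda pid: cosine(feat, memory[pid][setting]))

def appearance_predict(feat, setting, target_setting, memory):
    # Generative "appearance-prediction": borrow the associated identity's
    # appearance under the target setting as a proxy for the input face.
    pid = associate(feat, setting, memory)
    return memory[pid][target_setting]

def compare(feat_a, set_a, feat_b, set_b, memory):
    # Switching mechanism: direct appearance comparison when the two faces
    # share close settings; associate-predict when the settings differ.
    if set_a == set_b:
        return cosine(feat_a, feat_b)
    predicted = appearance_predict(feat_a, set_a, set_b, memory)
    return cosine(predicted, feat_b)

# Toy usage with random 64-dim features and two setting bins.
rng = np.random.default_rng(0)
memory = {pid: {"frontal": rng.normal(size=64),
                "profile": rng.normal(size=64)}
          for pid in range(100)}
probe = rng.normal(size=64)    # non-frontal probe face
gallery = rng.normal(size=64)  # frontal gallery face
print(compare(probe, "profile", gallery, "frontal", memory))
```

The discriminative “likelihood-prediction” variant would replace `appearance_predict` with a classifier trained on the associated identity's images versus the second face, but the associate-then-predict structure above is the same.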
