Abstract

In this paper, inspired by the great success of universal background modeling (UBM) in speech and speaker recognition, we present a new algorithm for face recognition. On the one hand, we encode each face image as an ensemble of X-Y patches, which integrate both local appearance and shape information. This X-Y patch representation makes it possible to compare two spatially different patches, and consequently relaxes the requirement of exact pixel-wise alignment. On the other hand, we train the UBM on the X-Y patches collected from images of different subjects, automatically adapt the UBM to each specific subject, and finally perform face recognition by comparing the ratio of the likelihoods under the subject-specific model and the UBM. The UBM also lends the algorithm robustness to image occlusion, since occluded patches may not contribute evidence to any subject. Comparison experiments with state-of-the-art subspace learning algorithms on the popular CMU PIE face database, under a variety of configurations, demonstrate that the proposed algorithm brings significant improvement in face recognition accuracy and confirm its robustness to image occlusions.
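
To make the recognition pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the generic GMM-UBM recipe the abstract describes: fit a background Gaussian mixture on patch features pooled across many subjects, MAP-adapt the component means toward one enrolled subject, and score a probe image by the average per-patch log-likelihood ratio. The function names, the relevance factor, and the choice of diagonal covariances are assumptions made only for illustration; patch/feature extraction is left out.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative sketch only; parameter choices are assumptions, not the paper's settings.

def train_ubm(background_patches, n_components=64):
    """Fit a universal background GMM on patch features pooled from many subjects."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          max_iter=200, random_state=0)
    ubm.fit(background_patches)  # background_patches: (N, D) array of patch features
    return ubm

def map_adapt_means(ubm, subject_patches, relevance=16.0):
    """Classical MAP adaptation of the UBM component means toward one subject's patches."""
    resp = ubm.predict_proba(subject_patches)         # (N, K) responsibilities
    n_k = resp.sum(axis=0)                            # soft count per component
    # Data-weighted mean of the subject's patches under each component
    ex = (resp.T @ subject_patches) / np.maximum(n_k[:, None], 1e-10)
    alpha = (n_k / (n_k + relevance))[:, None]        # per-component adaptation weight
    adapted = GaussianMixture(n_components=ubm.n_components, covariance_type="diag")
    adapted.weights_ = ubm.weights_                   # keep UBM weights and covariances
    adapted.covariances_ = ubm.covariances_
    adapted.precisions_cholesky_ = ubm.precisions_cholesky_
    adapted.means_ = alpha * ex + (1.0 - alpha) * ubm.means_
    return adapted

def llr_score(subject_model, ubm, probe_patches):
    """Average per-patch log-likelihood ratio: subject-adapted model vs. UBM."""
    return np.mean(subject_model.score_samples(probe_patches)
                   - ubm.score_samples(probe_patches))
```

In this scoring scheme, occluded patches tend to receive low likelihood under every subject-adapted model and the UBM alike, so they contribute little to the ratio, which is one way to read the robustness claim above.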

