Abstract

In this paper, we propose a method that exploits the subject-specific nature of illumination variations on face images for face verification. Using a 3D wireframe model of the face, illumination variations are synthesized by rendering the model with texture to produce virtual face images. When these virtual images and a set of real face images are projected into eigenspace, they form two separate clusters, one for the virtual faces and one for the real-world faces. Moreover, for any subject, the virtual face cluster is more compact than the real face cluster. We therefore take the virtual face cluster as the reference and find a transformation that moves real face features closer to it. Specifically, we propose subject-specific relighting transformations that relight real face features in eigenspace toward the more compact virtual face feature cluster. This transformation is computed and stored during training. During testing, the subject-specific transformation is applied to the eigen-feature of the real face image before computing its distance from the reference cluster of the claimed subject. We report verification results on frontal face images with various lighting directions for all 68 subjects of the PIE database, and show using receiver operating characteristic (ROC) curves and equal error rates (EER) that the proposed subject-specific eigen-relighting gives significantly better face verification performance than a baseline system without eigen-relighting.
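The pipeline described above (eigenspace projection, a subject-specific transformation fitted during training, and distance-based verification at test time) can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's implementation: the eigenspace is ordinary PCA via SVD, and the "relighting" transformation is modeled as a least-squares linear map that pulls real eigen-features toward the centroid of the virtual cluster. All array names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for one subject's face images (rows = images,
# columns = pixels). In the paper, virtual images are rendered from a
# textured 3D wireframe model under varied lighting; real images are
# captured photographs. Here both are random data for illustration.
n_virtual, n_real, n_pixels, n_components = 40, 20, 256, 10
virtual_imgs = rng.normal(size=(n_virtual, n_pixels))
real_imgs = virtual_imgs[:n_real] + rng.normal(scale=2.0, size=(n_real, n_pixels))

# Build the eigenspace (eigenfaces) from the pooled training images.
all_imgs = np.vstack([virtual_imgs, real_imgs])
mean_face = all_imgs.mean(axis=0)
_, _, vt = np.linalg.svd(all_imgs - mean_face, full_matrices=False)
basis = vt[:n_components]                        # top eigenfaces

def project(x):
    """Project image(s) into eigenspace -> eigen-feature(s)."""
    return (x - mean_face) @ basis.T

v_feat = project(virtual_imgs)                   # compact virtual cluster
r_feat = project(real_imgs)                      # scattered real cluster

# Training: fit a subject-specific linear map T that relights real
# eigen-features toward the virtual-cluster centre (a simple proxy for
# the paper's eigen-relighting transformation), and store T.
centre = v_feat.mean(axis=0)
targets = np.tile(centre, (n_real, 1))
T, *_ = np.linalg.lstsq(r_feat, targets, rcond=None)

# Testing: relight the probe's eigen-feature with the claimed subject's
# stored T, then measure its distance from the reference cluster centre.
probe_feat = project(real_imgs[0])
dist_after = np.linalg.norm(probe_feat @ T - centre)
```

In aggregate the fitted map cannot move the real features farther from the virtual centre than they started, since the identity map is one of the candidates the least-squares fit considers; verification would then threshold `dist_after` to accept or reject the claimed identity.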
