Abstract
Single sample per person face recognition under varying illumination is a challenging problem. Conventional techniques for illumination-invariant face recognition either perform illumination normalization on the whole face or learn an illumination-invariant representation from the face image. This paper argues that deep learning methods, which more closely resemble the behavior of the primate brain, can combine the advantages of both conventional approaches. Motivated by the success of generative adversarial networks in image representation, this paper proposes the IL-GAN model, built on the basic structures of the variational auto-encoder and the generative adversarial network, which generates the Controlled Illumination-level Face Image while preserving identity characteristics and learns a powerful latent representation of the face image that encodes illumination-invariant signatures. Moreover, the model can be applied to single sample per person face recognition. In addition, this research proposes a novel illumination level estimation method based on singular value decomposition to enable optional generation of the Controlled Illumination-level Face Image. Finally, the performance of the proposed method and other state-of-the-art techniques is evaluated on the Extended Yale B, CMU PIE, IJB-A, and our self-built Driver Face databases. The experimental results indicate that the IL-GAN model outperforms previous approaches for single sample per person face recognition under varying illumination.
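To illustrate the kind of SVD-based illumination level estimation the abstract refers to, the following is a minimal sketch, assuming the dominant singular value of the grayscale face image, relative to the total singular-value energy, is used as a proxy for overall lighting level; the abstract does not specify the exact statistic, so the function name and ratio below are illustrative assumptions rather than the paper's actual method.

```python
# Illustrative sketch only: the paper estimates illumination level via
# singular value decomposition, but the exact statistic is an assumption here.
import numpy as np

def estimate_illumination_level(face_gray: np.ndarray) -> float:
    """Return a scalar illumination-level estimate for a grayscale face image.

    Assumption: the ratio of the largest singular value to the sum of all
    singular values is taken as a proxy for the overall lighting level.
    """
    # Singular values of the image matrix, returned in descending order.
    singular_values = np.linalg.svd(face_gray.astype(np.float64), compute_uv=False)
    return float(singular_values[0] / singular_values.sum())

# Example usage on a synthetic image.
if __name__ == "__main__":
    img = np.random.rand(128, 128) * 255.0
    print(estimate_illumination_level(img))
```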