Abstract

Visible to near-infrared (VIS-NIR) face matching is a challenging problem in heterogeneous face recognition, owing to the large spectral domain discrepancy and to overfitting caused by the scarcity of paired VIS and NIR images during training. This paper proposes a coupled adversarial learning (CAL) approach for VIS-NIR face matching that performs adversarial learning at both the image level and the feature level. At the image level, we learn a transformation network from unpaired NIR-VIS images that maps a NIR image into the VIS domain. A cycle loss, a global intensity loss, and a local texture loss are employed to better capture the discrepancy between the NIR and VIS domains. The synthesized NIR or VIS images can further be used to alleviate overfitting in a semi-supervised way. At the feature level, we seek a shared feature space in which the heterogeneous face matching problem can be approximately treated as a homogeneous one. An adversarial loss and an orthogonal constraint are employed to reduce the spectral domain discrepancy and the overfitting problem, respectively. Experimental results show that CAL not only synthesizes high-quality VIS and NIR images but also achieves state-of-the-art recognition results.
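The abstract names a cycle loss and an orthogonal constraint but does not give their closed forms. As a rough illustrative sketch only (the L1 cycle form and the Frobenius-norm orthogonality penalty are common choices, not confirmed details of this paper, and the function names are assumptions):

```python
import numpy as np

def cycle_loss(nir, nir_reconstructed):
    # Cycle-consistency (sketched here as an L1 term): a NIR image mapped
    # to the VIS domain and back should match the original NIR image.
    return np.mean(np.abs(nir - nir_reconstructed))

def orthogonality_penalty(W):
    # One common way to impose an orthogonal constraint on a projection
    # matrix W: penalize the Frobenius-norm deviation of W^T W from I.
    d = W.shape[1]
    return np.linalg.norm(W.T @ W - np.eye(d), ord="fro") ** 2
```

A perfect reconstruction drives the cycle term to zero, and an exactly orthogonal `W` drives the penalty to zero; in training these would be weighted and summed with the adversarial and texture/intensity losses.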
