Abstract

Traditional biometric identification matches a probe against a gallery drawn from the same biometric modality (or the same set of modalities). This paper presents a cross-matching scenario in which the probe and gallery come from two distinct biometrics, namely face and periocular, coined face-periocular cross-identification (FPCI). We propose a novel contrastive loss tailored for face-periocular cross-matching to learn a joint embedding that can serve as either gallery or probe regardless of the biometric modality. In addition, a hybrid attention vision transformer is devised. The hybrid attention module performs depth-wise convolution and convolution-based multi-head self-attention in parallel to aggregate global and local features of the face and periocular biometrics. Extensive experiments on three benchmark datasets demonstrate that our model substantially improves FPCI performance. Furthermore, a new in-the-wild face-periocular dataset, the Cross-modal Face-periocular dataset, is developed for training FPCI models.
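To illustrate the kind of objective the abstract describes, the sketch below implements a generic symmetric cross-modal contrastive (InfoNCE-style) loss over paired face and periocular embeddings. This is an assumption-laden illustration, not the paper's exact loss: the function name, the temperature parameter, and the symmetric two-direction formulation are standard choices from the contrastive-learning literature, chosen here only to show how matched face/periocular pairs are pulled together in a joint embedding space.

```python
import numpy as np

def cross_modal_contrastive_loss(face_emb, peri_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over paired embeddings.

    face_emb, peri_emb: (N, D) arrays; row i of each comes from the
    same subject, so the positives lie on the diagonal of the
    similarity matrix. NOTE: a generic cross-modal contrastive loss,
    not necessarily the exact loss proposed in the paper.
    """
    # L2-normalise so the dot product equals cosine similarity
    f = face_emb / np.linalg.norm(face_emb, axis=1, keepdims=True)
    p = peri_emb / np.linalg.norm(peri_emb, axis=1, keepdims=True)

    logits = f @ p.T / temperature   # (N, N) cross-modal similarities
    labels = np.arange(len(f))       # matched pairs on the diagonal

    def softmax_ce(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Symmetric: face->periocular and periocular->face retrieval
    return 0.5 * (softmax_ce(logits, labels) + softmax_ce(logits.T, labels))
```

Under such a loss, an embedding from either modality can be compared directly against a gallery of the other, which is what enables the cross-identification scenario.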
