Abstract

Heterogeneous face recognition (HFR) aims to match face images of the same person across different modalities. Most HFR methods bridge cross-modality variations through feature alignment based on global feature representation learning, but ignore the content information of local features and the modality-style information of the face image in each modality, which limits HFR performance. The content information of local features not only captures modality-invariant facial cues but also improves the stability of global face features, since local regions such as the eyes, nose, and mouth are stable and invariant across modalities. Motivated by this, we propose a cross-modality dual-constraint (CMDC) approach that consists of a part-facial relational attention network (PRAN) and a modality-style attention network (MSAN). First, PRAN is designed to estimate the intrinsic structural relationships of local content features within each modality. It extracts discriminative local face features by capturing correlations within the face space of an individual modality, and strengthens the representations with contextual relationships across modalities. Second, MSAN captures the modality-style information of each modality and reduces inter-modality differences by minimizing the distance between the two modality-style features. Third, to alleviate cross-modality variations and enhance intra-class compactness and inter-class separability, we propose a cross-modality dual-constrained loss (DCLoss), which imposes a global constraint on the distribution of each sample in the embedding space. Beyond modality-style information, DCLoss also emphasizes category information. Extensive experiments on four datasets demonstrate that our approach outperforms existing state-of-the-art methods. The code is available at https://github.com/JianYu777/CMDC.
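To make the modality-style distance idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes channel-wise feature statistics as a simple proxy for modality-style features and an L2 penalty between the two modalities; the function names, the choice of statistics, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def style_statistics(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise mean/std as a stand-in for modality-style features.

    feat: (B, C, H, W) feature maps from one modality's backbone.
    Returns a (B, 2C) vector of per-channel statistics.
    """
    mu = feat.mean(dim=(2, 3))            # (B, C) per-channel means
    sigma = feat.std(dim=(2, 3)) + 1e-6   # (B, C) per-channel stds
    return torch.cat([mu, sigma], dim=1)  # (B, 2C)


def modality_style_distance(feat_vis: torch.Tensor, feat_nir: torch.Tensor) -> torch.Tensor:
    """L2 distance between the style statistics of two modalities.

    Minimizing this term pushes the two modality-style representations
    together, mirroring the goal of reducing inter-modality differences.
    """
    return F.mse_loss(style_statistics(feat_vis), style_statistics(feat_nir))


# Usage: feature maps from a shared backbone applied to paired VIS/NIR batches.
feat_vis = torch.randn(8, 256, 14, 14)
feat_nir = torch.randn(8, 256, 14, 14)
loss_style = modality_style_distance(feat_vis, feat_nir)
```

In the actual CMDC approach this role is played by the learned MSAN; the sketch only shows how a distance between two modality-style representations can be turned into a training penalty.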
