Abstract

Compared with traditional visible–visible person re-identification, the modality discrepancy between visible and infrared images makes visible–infrared person re-identification more challenging. Existing methods rely on learning transformation mechanisms from paired images to reduce the modality gap, which inevitably introduces noise. To overcome these limitations, we propose a Hierarchical Cross-modal shared Feature Network (HCFN) to mine modality-shared and modality-specific information. Since infrared images lack color and related appearance information, we construct an Intra-modal Feature Extraction Module (IFEM) to learn content information and reduce the difference between visible and infrared images. To further reduce the heterogeneous gap, we apply a Cross-modal Graph Interaction Module (CGIM) to align and narrow the set-level distance between inter-modal images. By jointly learning the two modules, our method achieves 66.44% Rank-1 accuracy on the SYSU-MM01 dataset and 74.81% Rank-1 accuracy on the RegDB dataset, outperforming state-of-the-art methods. In addition, ablation experiments demonstrate that HCFN improves over the baseline network by at least 4.9%.
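
To make the two-module design concrete, the sketch below shows one possible way the abstract's architecture could be wired together: modality-specific shallow branches (IFEM), a shared pooling stage, and a cross-modal graph interaction over set-level features (CGIM). This is a minimal illustrative sketch, not the authors' implementation; all layer choices, dimensions, and the specific graph-attention formula are assumptions introduced here for clarity.

```python
# Hypothetical PyTorch sketch of an HCFN-style two-module network.
# Module names follow the abstract (IFEM, CGIM); every implementation
# detail below is an assumption for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IFEM(nn.Module):
    """Intra-modal Feature Extraction Module: one shallow branch per
    modality to capture modality-specific content information."""
    def __init__(self, dim=256):
        super().__init__()
        self.visible = nn.Sequential(
            nn.Conv2d(3, dim, 3, 2, 1), nn.BatchNorm2d(dim), nn.ReLU())
        self.infrared = nn.Sequential(
            nn.Conv2d(3, dim, 3, 2, 1), nn.BatchNorm2d(dim), nn.ReLU())

    def forward(self, x, modality):
        return self.visible(x) if modality == "visible" else self.infrared(x)


class CGIM(nn.Module):
    """Cross-modal Graph Interaction Module: builds an affinity graph
    between the two modalities' set-level features and propagates
    information across it to narrow the set-level distance
    (assumed scaled-dot-product formulation)."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats_v, feats_i):
        # Affinity between every visible/infrared pair in the batch.
        affinity = F.softmax(
            feats_v @ feats_i.t() / feats_v.size(1) ** 0.5, dim=1)
        # Each feature aggregates aligned information from the other modality.
        v_out = feats_v + self.proj(affinity @ feats_i)
        i_out = feats_i + self.proj(affinity.t() @ feats_v)
        return v_out, i_out


class HCFN(nn.Module):
    """Hierarchical Cross-modal shared Feature Network: IFEM -> pooling -> CGIM."""
    def __init__(self, dim=256, num_ids=395):
        super().__init__()
        self.ifem = IFEM(dim)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cgim = CGIM(dim)
        self.classifier = nn.Linear(dim, num_ids)

    def forward(self, x_v, x_i):
        f_v = self.pool(self.ifem(x_v, "visible")).flatten(1)
        f_i = self.pool(self.ifem(x_i, "infrared")).flatten(1)
        f_v, f_i = self.cgim(f_v, f_i)
        return self.classifier(f_v), self.classifier(f_i)


if __name__ == "__main__":
    model = HCFN()
    logits_v, logits_i = model(torch.randn(4, 3, 288, 144),
                               torch.randn(4, 3, 288, 144))
    print(logits_v.shape, logits_i.shape)  # torch.Size([4, 395]) twice
```

In practice, the identity logits from both branches would be trained jointly (e.g., with cross-entropy plus a cross-modal metric loss), which is what "jointly learning two modules" in the abstract refers to; the specific losses are not detailed here.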
