Abstract

Visible-infrared person re-identification (VI-ReID) is a challenging computer vision task due to the substantial modality gap between visible and infrared images. Existing approaches improve performance by addressing cross-modality discrepancies, but they often fail to generate compensation features that fully exploit the information unique to each modality. Moreover, these methods focus mainly on pixel-level fusion of images, disregarding the challenge of modality misalignment. To address these issues, we propose a novel visible-infrared person re-identification method that explores modality enhancement and compensation spaces to extract more discriminative modality information. Furthermore, we introduce a modality mutual guidance strategy comprising an identity information mutual learning loss and a modality-guided alignment loss, which effectively leverages learned identity-related features to guide alignment between the visible and infrared modalities. Extensive experiments on public datasets demonstrate that our method significantly outperforms existing state-of-the-art approaches.
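To make the mutual guidance strategy concrete, the following is a minimal PyTorch-style sketch of two losses of the kind the abstract describes. The function names and the specific formulations (symmetric KL divergence for identity mutual learning, per-identity center matching for modality alignment) are illustrative assumptions, not the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def mutual_learning_loss(logits_vis, logits_ir):
    # Symmetric KL divergence between the identity predictions of the
    # two modality branches, so each branch learns from the other.
    log_p_vis = F.log_softmax(logits_vis, dim=1)
    log_p_ir = F.log_softmax(logits_ir, dim=1)
    kl_vi = F.kl_div(log_p_vis, log_p_ir.exp(), reduction="batchmean")
    kl_iv = F.kl_div(log_p_ir, log_p_vis.exp(), reduction="batchmean")
    return 0.5 * (kl_vi + kl_iv)

def modality_alignment_loss(feat_vis, feat_ir, labels):
    # Identity-guided alignment: pull the per-identity feature centers
    # of the visible and infrared modalities together in the shared
    # embedding space (a hypothetical stand-in for the paper's
    # modality-guided alignment loss).
    pids = labels.unique()
    loss = feat_vis.new_zeros(())
    for pid in pids:
        mask = labels == pid
        center_vis = feat_vis[mask].mean(dim=0)
        center_ir = feat_ir[mask].mean(dim=0)
        loss = loss + F.mse_loss(center_vis, center_ir)
    return loss / pids.numel()
```

In training, the two terms would typically be added to the standard identity classification and triplet losses with weighting coefficients; the weights and the exact feature spaces used are assumptions here.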
