Abstract

Visible-infrared person re-identification (VI-ReID) is a challenging task in computer vision due to the substantial modality gap between visible and infrared images. Existing approaches improve performance by addressing cross-modality discrepancies, but they often fail to generate compensation features that fully exploit the information unique to each modality. Moreover, these methods mainly focus on pixel-level fusion of images and disregard the challenge of modality misalignment. To address these issues, we propose a novel visible-infrared person re-identification method that explores modality enhancement and compensation spaces to extract more discriminative modality information. Furthermore, we introduce a modality mutual guidance strategy incorporating an identity information mutual learning loss and a modality-guided alignment loss, which effectively leverages the learned identity-related features to guide alignment between the visible and infrared modalities. Extensive experiments on public datasets demonstrate that our method significantly outperforms existing state-of-the-art approaches.
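The abstract does not specify the exact form of the two losses. As a minimal illustrative sketch only, the snippet below assumes a common VI-ReID pattern: a symmetric KL-based mutual learning term between the identity predictions of the visible and infrared branches, plus an L2 feature alignment term between paired cross-modality features. The function names, weighting, and feature shapes are hypothetical, not the authors' implementation.

```python
# Hedged sketch: assumes a KL-based identity mutual learning term and an
# L2 feature alignment term; these are illustrative stand-ins, not the
# paper's exact losses.
import torch
import torch.nn.functional as F

def identity_mutual_learning_loss(logits_vis, logits_ir):
    # Symmetric KL divergence between the identity predictions of the two
    # modalities, so each branch distills identity knowledge from the other.
    log_p_vis = F.log_softmax(logits_vis, dim=1)
    log_p_ir = F.log_softmax(logits_ir, dim=1)
    kl_vi = F.kl_div(log_p_vis, log_p_ir.exp(), reduction="batchmean")
    kl_iv = F.kl_div(log_p_ir, log_p_vis.exp(), reduction="batchmean")
    return 0.5 * (kl_vi + kl_iv)

def modality_guided_alignment_loss(feat_vis, feat_ir):
    # Pull L2-normalized visible and infrared features of the same identity together.
    feat_vis = F.normalize(feat_vis, dim=1)
    feat_ir = F.normalize(feat_ir, dim=1)
    return (feat_vis - feat_ir).pow(2).sum(dim=1).mean()

if __name__ == "__main__":
    # Toy batch: 8 cross-modality pairs, 100 identities, 512-dim features (arbitrary sizes).
    logits_vis, logits_ir = torch.randn(8, 100), torch.randn(8, 100)
    feat_vis, feat_ir = torch.randn(8, 512), torch.randn(8, 512)
    loss = identity_mutual_learning_loss(logits_vis, logits_ir) \
           + 0.5 * modality_guided_alignment_loss(feat_vis, feat_ir)
    print(loss.item())
```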
