Abstract

Visible-infrared person re-identification (VI-ReID) is challenging due to the large modality discrepancy between visible and infrared images. Existing methods mainly focus on learning modality-shared representations by embedding images from different modalities into a common feature space, in which some discriminative modality-specific information is discarded. Different from these methods, in this paper, we propose a novel Modality-Specific Memory Network (MSMNet) to complete the missing modality information and aggregate visible and infrared modality features into a unified feature space for the VI-ReID task. The proposed model enjoys several merits. First, it can exploit the completed missing-modality information to alleviate the modality discrepancy when only a single-modality input is provided. To the best of our knowledge, this is the first work to complete missing modality information and alleviate the modality discrepancy with a memory network. Second, to guide the learning process of the memory network, we design three effective learning strategies: feature consistency, memory representativeness, and structural alignment. By incorporating these learning strategies in a unified model, the memory network can be effectively trained to propagate identity-related information between modalities and boost VI-ReID performance. Extensive experimental results on two standard benchmarks (SYSU-MM01 and RegDB) demonstrate that the proposed MSMNet performs favorably against state-of-the-art methods.
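To make the core idea concrete, the following is a minimal sketch of a modality-specific memory module of the kind the abstract describes: a feature extracted from one modality queries a learnable memory bank of the other modality via soft attention, and the recalled feature is fused with the observed one into a unified representation. All names (MSMemory, num_slots, the additive fusion) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of memory-based missing-modality completion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSMemory(nn.Module):
    """One learnable memory bank per modality (0: visible, 1: infrared).
    A feature from one modality reads the *other* modality's memory to
    synthesize its missing-modality counterpart."""
    def __init__(self, num_slots: int = 256, dim: int = 2048):
        super().__init__()
        self.memory = nn.ParameterList([
            nn.Parameter(torch.randn(num_slots, dim) * 0.02) for _ in range(2)
        ])

    def complete(self, feat: torch.Tensor, modality: int) -> torch.Tensor:
        # Soft-attention read over the opposite modality's memory slots.
        mem = self.memory[1 - modality]                                  # (S, D)
        attn = F.softmax(feat @ mem.t() / feat.size(-1) ** 0.5, dim=-1)  # (B, S)
        return attn @ mem                                                # (B, D)

    def forward(self, feat: torch.Tensor, modality: int) -> torch.Tensor:
        # Aggregate the observed feature and the recalled cross-modality
        # feature into a single unified embedding.
        completed = self.complete(feat, modality)
        return F.normalize(feat + completed, dim=-1)

# Usage: a visible-image feature recalls its infrared counterpart.
mem = MSMemory(num_slots=128, dim=512)
vis_feat = torch.randn(8, 512)       # batch of visible-modality features
unified = mem(vis_feat, modality=0)  # (8, 512) unified representation
```

Under this reading, the paper's three learning strategies would act as losses on such a module: feature consistency constrains the recalled feature to match the true paired-modality feature, memory representativeness keeps the slots informative, and structural alignment preserves inter-sample relations across modalities.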
