Abstract

Visible-infrared person re-identification (Re-ID) has received increasing research attention for its great practical value in night-time surveillance. The task is challenging because of large variations in person pose, viewpoint, and occlusion within the same modality, as well as the domain gap introduced by the heterogeneous modalities. Unlike metric learning methods for visible-light person Re-ID, which impose similarity constraints only at the class level, an effective metric learning approach for visible-infrared person Re-ID should take both class-level and modality-level similarity constraints into full consideration to learn sufficiently discriminative and robust features. In this article, we divide the hybrid-modality setting into two types: within modality and cross modality. We first analyze the variations that degrade the ranking results of visible-infrared person Re-ID and summarize them into three types: within-modality variation, cross-modality modality-related variation, and cross-modality modality-unrelated variation. We then propose a comprehensive metric learning framework based on four kinds of pair-based similarity constraints to address all variations within and across modalities; the framework models both class-level and modality-level similarity relationships between person images. Furthermore, we demonstrate that the framework is compatible with any pair-based loss function by giving detailed implementations that combine it with the triplet loss and the contrastive loss separately. Finally, extensive experiments on SYSU-MM01 and RegDB demonstrate the effectiveness and superiority of the proposed metric learning framework for visible-infrared person Re-ID.
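To make the idea of combining class-level and modality-level constraints concrete, the following is a minimal illustrative sketch, not the paper's exact formulation. It assumes the four pair-based constraints correspond to positive and negative pairs taken separately from within-modality and cross-modality pairs, and applies a hard-mining triplet loss to each pair type; the function name, margin value, and batch format are hypothetical.

```python
import torch
import torch.nn.functional as F

def modality_aware_triplet_loss(features, labels, modalities, margin=0.3):
    """Illustrative sketch (not the authors' implementation):
    apply a triplet-style constraint separately to within-modality and
    cross-modality pairs, covering class-level and modality-level relations."""
    dist = torch.cdist(features, features)                          # pairwise Euclidean distances
    same_id = labels.unsqueeze(0) == labels.unsqueeze(1)            # class-level relation
    same_mod = modalities.unsqueeze(0) == modalities.unsqueeze(1)   # modality-level relation
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    losses = []
    for cross in (False, True):                                     # within-modality, then cross-modality
        mod_mask = ~same_mod if cross else same_mod
        for i in range(features.size(0)):
            pos = dist[i][same_id[i] & mod_mask[i] & not_self[i]]   # positives of this pair type
            neg = dist[i][~same_id[i] & mod_mask[i]]                # negatives of this pair type
            if len(pos) and len(neg):
                # hardest positive vs. hardest negative within this pair type
                losses.append(F.relu(pos.max() - neg.min() + margin))
    return torch.stack(losses).mean() if losses else features.new_zeros(())
```

In this sketch, `modalities` would be a tensor marking each image as visible or infrared; swapping `F.relu(... + margin)` for a pairwise contrastive term would yield the contrastive-loss variant mentioned in the abstract.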
