Abstract

With the emergence of large-scale datasets and deep learning systems, person re-identification (Re-ID) has made significant breakthroughs. Meanwhile, Visible-Thermal person re-identification (V-T Re-ID) between visible and thermal images has also received ever-increasing attention. However, most typical visible-visible person re-identification (V-V Re-ID) algorithms are difficult to apply directly to V-T Re-ID, due to the large cross-modality intra-class and inter-class variation. In this paper, we build an end-to-end dual-path spatial-structure-preserving common space network that effectively transfers V-V Re-ID methods to the V-T Re-ID domain. The framework consists of two main parts: a modality-specific feature embedding network and a common feature space. Benefiting from the common space, our framework can extract attentive common information by learning local feature representations for V-T Re-ID. We conduct extensive experiments on the publicly available RGB-IR Re-ID benchmark datasets, SYSU-MM01 and RegDB, to demonstrate the effectiveness of bridging the gap between V-V Re-ID and V-T Re-ID. Experimental results achieve state-of-the-art performance.
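The two-part design named above — modality-specific embedding branches feeding a shared common feature space, in which cross-modality features become directly comparable — can be sketched minimally. This is an illustrative NumPy sketch, not the paper's actual architecture: the layer sizes, weight names (`W_visible`, `W_thermal`, `W_shared`), and the use of single linear layers in place of deep CNN backbones are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: each modality has its own input size;
# both branches project into one shared common space.
VIS_DIM, THERM_DIM, COMMON_DIM = 512, 512, 128

# Modality-specific embedding weights (one branch per modality).
W_visible = rng.standard_normal((VIS_DIM, COMMON_DIM)) * 0.01
W_thermal = rng.standard_normal((THERM_DIM, COMMON_DIM)) * 0.01

# Shared projection applied to both branches (the common feature space).
W_shared = rng.standard_normal((COMMON_DIM, COMMON_DIM)) * 0.01

def embed(x, W_branch):
    """Branch-specific embedding followed by the shared projection,
    L2-normalized so both modalities compare by cosine similarity."""
    h = np.maximum(x @ W_branch, 0.0)  # modality-specific layer + ReLU
    z = h @ W_shared                   # shared common-space layer
    return z / (np.linalg.norm(z, axis=-1, keepdims=True) + 1e-12)

# Dual-path forward pass: visible and thermal inputs take separate
# branches but land in the same space.
vis_feat = embed(rng.standard_normal((4, VIS_DIM)), W_visible)
th_feat = embed(rng.standard_normal((4, THERM_DIM)), W_thermal)

# Cross-modality matching: cosine similarity in the common space.
similarity = vis_feat @ th_feat.T
print(similarity.shape)  # (4, 4)
```

In practice the two branches would be deep CNNs trained jointly (e.g. with identity and cross-modality losses), but the key structural point survives the simplification: separate parameters per modality up front, shared parameters once features reach the common space.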

