Abstract

Visible–infrared person re-identification (VI-ReID) is an important and practical task for full-time intelligent surveillance systems. Compared with visible-only person re-identification, it is more challenging due to the large cross-modal discrepancy. Existing VI-ReID methods are hindered by the heterogeneous structures and differing spectra of visible and infrared images. In this work, we propose a Spectrum-Insensitive Data Augmentation (SIDA) strategy, which effectively alleviates disturbances in the visible and infrared spectra and forces the network to learn spectrum-irrelevant features. The network compares samples using both global and local features. For the local features, we devise a Feature Relation Reasoning (FRR) module that learns discriminative fine-grained representations by graph reasoning. Compared with the commonly used uniform partition, FRR adapts better to VI-ReID, where human bodies are difficult to align across modalities. Furthermore, we design a dual center loss for learning the global feature, which maintains intra-modality relations while learning cross-modal similarities. Our method also achieves better convergence during training. Extensive experiments demonstrate state-of-the-art performance on two visible–infrared cross-modal Re-ID datasets.
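
To make the dual center idea concrete, below is a minimal PyTorch sketch of one plausible formulation, not the paper's exact loss: it assumes per-identity feature centers computed separately in each modality, a squared-distance cross-modal term that pulls same-identity centers together, and an intra-modality compactness term. The function name, distance choices, and equal weighting are all our assumptions.

import torch
import torch.nn.functional as F

def dual_center_loss(feat_v, feat_i, labels_v, labels_i):
    """Hypothetical sketch of a dual center loss.

    feat_v, feat_i: (N, D) global features from the visible and
    infrared branches; labels_v, labels_i: (N,) identity labels.
    """
    loss_cross = feat_v.new_zeros(())
    loss_intra = feat_v.new_zeros(())
    ids = torch.unique(torch.cat([labels_v, labels_i]))
    for pid in ids:
        v = feat_v[labels_v == pid]
        r = feat_i[labels_i == pid]
        if v.numel() == 0 or r.numel() == 0:
            continue  # identity absent in one modality for this batch
        c_v, c_r = v.mean(dim=0), r.mean(dim=0)
        # Cross-modal term: pull the two modality centers of the
        # same identity together to learn cross-modal similarity.
        loss_cross = loss_cross + F.mse_loss(c_v, c_r)
        # Intra-modality term: keep each sample close to its own
        # modality's center, preserving intra-modality relations.
        loss_intra = loss_intra + (v - c_v).pow(2).sum(dim=1).mean() \
                                + (r - c_r).pow(2).sum(dim=1).mean()
    return loss_cross + loss_intra

In practice the two terms would likely be balanced by weighting hyperparameters and combined with standard identity classification and triplet losses; the abstract does not specify these details.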
