Abstract

Person re-identification (Re-ID) is widely used in intelligent surveillance systems, aiming to retrieve images of a specific pedestrian across different cameras. Although existing person Re-ID methods have achieved inspiring success, they still fall short in practical surveillance applications. To narrow this gap, we introduce the cross-modality person Re-ID problem in the clothes-changing scene. We also construct the first Visible-Infrared Clothes-Changing (NEU-VICC) dataset, which contains 16,632 RGB images and 8,374 infrared images of 107 pedestrians. The critical challenge of cross-modality person Re-ID in the clothes-changing scene lies in the large modality discrepancy and the intra-class discrepancy caused by clothes changes. To address this challenge, we propose a novel Semantic-Constraint Clothes-Changing Augmentation Network (SC3ANet) built on current cross-modality person Re-ID methods. Specifically, we design a semantic-constraint clothes-changing module that guides the model to learn clothes-irrelevant features by randomly changing pedestrians' clothes. In addition, we devise a dual-granularity constraint loss module to mitigate inter-modality and intra-class discrepancies. Experiments on our NEU-VICC dataset show that SC3ANet achieves the best results. The dataset and code are available at: https://github.com/VDT-2048/NEU-VICC.
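The abstract only names the two modules, so the following is a minimal, hypothetical PyTorch sketch of what they might look like. The function names, the assumed human-parsing clothes mask, the 0/1 modality encoding, and the center-based loss terms are all assumptions for illustration, not the authors' implementation; the real code is in the linked repository.

```python
import torch

def semantic_clothes_change(img, clothes_mask, generator=None):
    """Hypothetical sketch of a semantic-constraint clothes-changing
    augmentation: randomly re-color only the clothing region (given by a
    human-parsing mask, assumed available) so the model cannot rely on
    clothes appearance as an identity cue.

    img:          (3, H, W) float tensor in [0, 1]
    clothes_mask: (H, W) bool tensor marking clothing pixels
    """
    perm = torch.randperm(3, generator=generator)          # shuffle channels
    gain = 0.5 + torch.rand(3, 1, 1, generator=generator)  # random tint
    changed = (img[perm] * gain).clamp(0, 1)
    # apply the change only inside the clothes region
    return torch.where(clothes_mask.unsqueeze(0), changed, img)


def dual_granularity_loss(feats, labels, modality):
    """Hypothetical sketch of a dual-granularity constraint: a coarse
    term aligns per-identity visible/infrared centers (inter-modality
    gap), and a fine term pulls every sample toward its identity center
    (intra-class gap, e.g. from clothes changes).

    feats:    (N, D) L2-normalised embeddings
    labels:   (N,)   identity labels
    modality: (N,)   0 = visible, 1 = infrared
    """
    coarse, fine, n_ids = feats.new_zeros(()), feats.new_zeros(()), 0
    for pid in labels.unique():
        idx = labels == pid
        vis = feats[idx & (modality == 0)]
        ir = feats[idx & (modality == 1)]
        if len(vis) == 0 or len(ir) == 0:
            continue
        # coarse granularity: modality centers of one identity should match
        coarse = coarse + (vis.mean(0) - ir.mean(0)).pow(2).sum()
        # fine granularity: samples should stay near their identity center
        center = feats[idx].mean(0)
        fine = fine + (feats[idx] - center).pow(2).sum(1).mean()
        n_ids += 1
    return (coarse + fine) / max(n_ids, 1)
```

In this reading, the augmentation attacks the intra-class discrepancy at the input level while the loss attacks both discrepancies at the feature level; whether the paper combines them this way is an assumption.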
