Abstract

Recently, unsupervised domain-adaptive person Re-identification methods have been studied extensively because they do not require annotations, and they have achieved excellent performance. Most existing methods train the Re-ID model to learn a discriminative feature representation. However, they usually train the model only on the global feature of a pedestrian image and neglect local features, which restricts further improvements in model performance. To address this problem, two local branches are added to the network so that the model can focus on local features that contain identity information. Furthermore, we propose a self-supervised consistency constraint to further improve the robustness of the model. Specifically, the self-supervised consistency constraint relies only on basic data-augmentation operations, without any auxiliary networks, and effectively improves model performance. In addition, a learnable memory matrix is designed to store the mapping vectors that map person features into probability distributions. Finally, extensive experiments on multiple commonly used person Re-ID datasets verify the effectiveness of the proposed generative adversarial networks fusing global and local features. Experimental results show that our method achieves results comparable to state-of-the-art methods.
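
To illustrate the memory matrix and the self-supervised consistency constraint described above, the following is a minimal PyTorch sketch. It assumes cosine-similarity matching between normalized features and a learnable memory to produce probability distributions, and a symmetric KL agreement term between two augmented views of the same image; the names (MemoryClassifier, consistency_loss) and hyper-parameters are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryClassifier(nn.Module):
    """Sketch of a learnable memory matrix that maps person features
    into probability distributions over stored prototype vectors."""

    def __init__(self, feat_dim=2048, num_prototypes=700, temperature=0.05):
        super().__init__()
        # One learnable memory vector per prototype (hypothetical sizes).
        self.memory = nn.Parameter(torch.randn(num_prototypes, feat_dim))
        self.temperature = temperature

    def forward(self, features):
        # Cosine similarity between L2-normalized features and memory vectors,
        # softened into a probability distribution over prototypes.
        f = F.normalize(features, dim=1)
        m = F.normalize(self.memory, dim=1)
        logits = f @ m.t() / self.temperature
        return F.softmax(logits, dim=1)


def consistency_loss(probs_view1, probs_view2):
    """Self-supervised consistency: the distributions predicted for two
    augmented views of the same image should agree (symmetric KL)."""
    kl_1 = F.kl_div(probs_view2.log(), probs_view1, reduction="batchmean")
    kl_2 = F.kl_div(probs_view1.log(), probs_view2, reduction="batchmean")
    return 0.5 * (kl_1 + kl_2)


# Usage example with random features standing in for two augmented views.
classifier = MemoryClassifier()
feats_a = torch.randn(32, 2048)  # features of weakly augmented images
feats_b = torch.randn(32, 2048)  # features of strongly augmented images
loss = consistency_loss(classifier(feats_a), classifier(feats_b))
```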
