Abstract

Due to the lack of labeled data, it is usually difficult for an unsupervised person re-identification (re-ID) model to learn discriminative features. To address this issue, we propose a global-level and patch-level unsupervised feature learning framework that exploits both global and local information to obtain more discriminative features. For global-level learning, we design a global similarity-based loss (GSL) that leverages the similarities between whole images. Together with a memory-based non-parametric classifier, the GSL pulls credible samples closer to help train a discriminative model. For patch-level learning, we use a patch generation module to produce different patches. By applying a patch-based discriminative feature learning loss and an image-level feature learning loss, the patch branch of the network learns more representative patch features. Combining global-level and patch-level learning yields a more discriminative re-ID model. Experimental results on the Market-1501 and DukeMTMC-reID datasets demonstrate the superiority and effectiveness of our method for unsupervised person re-ID.
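The abstract does not spell out the form of the memory-based non-parametric classifier, so the following is only a minimal sketch of one common realization: features are compared against a memory bank of normalized feature slots with a temperature-scaled softmax, and the hit slots are refreshed with a momentum update. The class name, temperature, and momentum values are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

class MemoryClassifier:
    """Sketch of a memory-based non-parametric classifier (assumed form).

    Each row of `memory` stores one L2-normalized feature slot. Logits are
    cosine similarities against every slot, scaled by a temperature; slots
    hit by the mini-batch are refreshed with a momentum update.
    """

    def __init__(self, num_slots, feat_dim, temperature=0.05, momentum=0.5):
        self.memory = F.normalize(torch.randn(num_slots, feat_dim), dim=1)
        self.temperature = temperature
        self.momentum = momentum

    def loss(self, features, targets):
        # Non-parametric classification: softmax over similarities to all slots.
        features = F.normalize(features, dim=1)
        logits = features @ self.memory.t() / self.temperature  # (B, num_slots)
        loss = F.cross_entropy(logits, targets)
        self._update(features.detach(), targets)
        return loss

    @torch.no_grad()
    def _update(self, features, targets):
        # Momentum update of the slots corresponding to the current batch.
        old = self.memory[targets]
        new = self.momentum * old + (1.0 - self.momentum) * features
        self.memory[targets] = F.normalize(new, dim=1)
```

A global similarity-based loss in the spirit of the abstract would then add terms that pull each feature toward the memory slots of its most similar (credible) samples, rather than only toward its own slot.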
