Abstract

Unsupervised person re-identification (ReID) aims to learn discriminative identity features without ground-truth identity labels. Fully unsupervised person ReID methods usually iterate between pseudo-label prediction and representation learning and have achieved promising performance. However, these methods are often hampered by noisy pseudo-labels. To address this issue, we propose a reliability modeling and contrastive learning (RMCL) method, which aims to reduce the impact of noisy labels and enhance the robustness of the model to hard samples. Building on existing work, we first define the concept of probabilistic stability and design a stability estimation scheme to improve pseudo-label reliability modeling. Second, we explore a reliability–informativity function to redefine sample weights, which can be easily incorporated into existing optimization methods. Finally, we expand the range of hard samples and design an identity hard contrastive loss that increases the robustness of the model to hard samples. Experiments on three large-scale person ReID datasets (Market-1501, DukeMTMC-reID and MSMT17) validate the effectiveness of RMCL, which surpasses state-of-the-art fully unsupervised and cross-domain methods. Furthermore, we demonstrate the advantages of RMCL in a new "three2one" scenario.
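To make the reliability-weighting idea concrete, below is a minimal sketch (not the paper's exact formulation) of a cluster-prototype contrastive objective in which each sample's loss is scaled by a per-sample reliability score, as the abstract describes for down-weighting noisy pseudo-labels. All names (`reliability_weighted_contrastive`, the temperature value, and the random reliability scores) are hypothetical placeholders.

```python
# Illustrative sketch: reliability-weighted contrastive loss against cluster
# prototypes. Samples with low reliability contribute less to the objective.
import torch
import torch.nn.functional as F


def reliability_weighted_contrastive(features, pseudo_labels, prototypes,
                                      reliability, temperature=0.05):
    """InfoNCE-style loss over cluster prototypes, weighted per sample.

    features:      (N, D) L2-normalized sample embeddings
    pseudo_labels: (N,)   cluster indices from the clustering step
    prototypes:    (C, D) L2-normalized cluster centroids
    reliability:   (N,)   per-sample reliability weights in [0, 1]
    """
    logits = features @ prototypes.t() / temperature            # (N, C) similarities
    per_sample_loss = F.cross_entropy(logits, pseudo_labels,
                                      reduction="none")          # (N,)
    # Down-weight samples whose pseudo-labels are judged unreliable.
    return (reliability * per_sample_loss).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    N, D, C = 8, 16, 4
    feats = F.normalize(torch.randn(N, D), dim=1)
    protos = F.normalize(torch.randn(C, D), dim=1)
    labels = torch.randint(0, C, (N,))
    rel = torch.rand(N)  # e.g., from a stability estimate across iterations
    print(reliability_weighted_contrastive(feats, labels, protos, rel))
```

In the actual method, the reliability scores would come from the proposed probabilistic-stability estimation rather than random values, and the loss would additionally emphasize the expanded set of hard samples via the identity hard contrastive term.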
