Abstract
Unsupervised domain adaptation (UDA) for person re-identification (re-ID) is a challenging task that aims to learn a model from labeled source data and unlabeled target data so that it can recognize the same person across different cameras in the target domain. Recently, many promising clustering-based methods have been proposed for this task and have achieved sizable progress. However, without target labels, the clustering algorithms in these methods inevitably produce noisy pseudo-labels, and overfitting to these noisy labels severely harms the performance and generalization of the learned models. To address this issue, we propose a novel framework, Adaptive Deep Clustering (AdaDC), to reduce the negative impact of noisy pseudo-labels. On one hand, the proposed approach adaptively and alternately employs different clustering methods, fully exploiting their complementary information and avoiding overfitting to the noisy pseudo-labels of any single method. On the other hand, a progressive sample selection strategy reduces the noisy-label ratio in the pseudo-labels by integrating the results of the different clustering methods. Experiments show that the proposed approach achieves state-of-the-art performance compared with other recent UDA person re-ID methods on widely used datasets, and additional analysis experiments verify the effectiveness of its components.
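The abstract only sketches the mechanism, so the following is a minimal, hypothetical Python sketch of the two ideas it names: alternating between complementary clustering algorithms when generating pseudo-labels, and progressively keeping only samples on which the clusterings agree. The choice of DBSCAN and k-means, the consistency score, the threshold schedule, and all function names below are illustrative assumptions, not the paper's actual AdaDC design.

```python
# Hypothetical sketch of alternating clustering + progressive sample selection.
# DBSCAN/k-means, the agreement score, and the schedule are assumptions for
# illustration; the paper's concrete AdaDC components may differ.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def select_reliable(labels_a, labels_b, threshold):
    """Keep samples whose cluster-mates under clustering A largely remain
    cluster-mates under clustering B (a simple cross-clustering consistency
    score standing in for the paper's integration of clustering results)."""
    keep = np.zeros(len(labels_a), dtype=bool)
    for i in range(len(labels_a)):
        if labels_a[i] < 0:          # DBSCAN marks outliers with label -1
            continue
        mates_a = labels_a == labels_a[i]
        mates_b = labels_b == labels_b[i]
        # Fraction of A-cluster-mates that are still cluster-mates under B.
        consistency = (mates_a & mates_b).sum() / mates_a.sum()
        keep[i] = consistency >= threshold
    return keep

def pseudo_labels_for_epoch(features, epoch, n_ids=500):
    """Alternate the pseudo-label source across epochs and progressively
    tighten the selection threshold to shrink the noisy-label ratio."""
    db = DBSCAN(eps=0.6, min_samples=4, metric="cosine").fit_predict(features)
    km = KMeans(n_clusters=n_ids, n_init=10).fit_predict(features)
    labels = db if epoch % 2 == 0 else km     # alternate clustering methods
    threshold = min(0.9, 0.5 + 0.05 * epoch)  # progressive selection schedule
    keep = select_reliable(db, km, threshold)
    return labels, keep
```

In this reading, training on each epoch would use only the `keep`-flagged samples with their current pseudo-labels, so the model never fits one clustering's noise for long and the retained set grows cleaner as the threshold rises.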