Abstract

Person re-identification (Re-ID) has been widely studied by learning a discriminative feature representation from a set of well-annotated training data. Existing models usually assume that all training samples are correctly annotated. However, label noise is unavoidable due to false annotations in large-scale industrial applications. Different from the label noise problem in image classification, where abundant samples are available, person Re-ID with label noise usually has very limited annotated samples for each identity. In this paper, we propose a robust deep model, namely PurifyNet, to address this issue. PurifyNet has two key features: 1) it jointly refines the annotated labels and optimizes the neural network by progressively adjusting the predicted logits, reusing the wrong labels rather than simply filtering them out; 2) it simultaneously reduces the negative impact of noisy labels and pays more attention to hard samples with correct labels through a hard-aware instance re-weighting strategy. With limited annotated samples per identity, we demonstrate that hard sample mining is crucial for the label-corrupted Re-ID task, whereas it is usually ignored in existing robust deep learning methods. Extensive experiments on three datasets demonstrate the robustness of PurifyNet over competing methods under various settings. Meanwhile, we show that it consistently improves unsupervised and video-based Re-ID methods. Code is available at: https://github.com/mangye16/ReID-Label-Noise .
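The following PyTorch sketch only illustrates the two ideas summarized above (progressive label refinement and hard-aware instance re-weighting); it is not the paper's actual implementation, which is in the linked repository. The function names (refine_labels, hard_aware_weights, purify_step) and the specific mixing and weighting formulas are assumptions made for this example: annotated one-hot labels are gradually softened toward the model's own predictions, and per-sample losses are re-weighted by how consistent each sample is with its refined target.

    # Illustrative sketch only; names and formulas are hypothetical, not the paper's method.
    import torch
    import torch.nn.functional as F

    def refine_labels(one_hot_labels: torch.Tensor,
                      logits: torch.Tensor,
                      alpha: float) -> torch.Tensor:
        # Mix the (possibly noisy) annotated labels with the model's own
        # predictions; a larger alpha trusts the annotation more.
        pred = F.softmax(logits.detach(), dim=1)
        return alpha * one_hot_labels + (1.0 - alpha) * pred

    def hard_aware_weights(agreement: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
        # Down-weight samples whose prediction strongly disagrees with the
        # refined target (likely mislabeled); weights are normalized to mean 1.
        # NOTE: the paper's actual strategy also separates hard-but-correct
        # samples from noisy ones; this simple power weighting is a placeholder.
        w = agreement.pow(gamma)
        return w * (w.numel() / (w.sum() + 1e-12))

    def purify_step(logits: torch.Tensor,
                    labels: torch.Tensor,
                    num_classes: int,
                    alpha: float = 0.7) -> torch.Tensor:
        # One training step of the sketched loss: refine labels, compute a
        # soft cross-entropy, then re-weight each sample's loss.
        one_hot = F.one_hot(labels, num_classes).float()
        soft_targets = refine_labels(one_hot, logits, alpha)

        log_probs = F.log_softmax(logits, dim=1)
        per_sample_loss = -(soft_targets * log_probs).sum(dim=1)

        # "Agreement": probability mass the model assigns to the refined target.
        probs = F.softmax(logits.detach(), dim=1)
        agreement = (probs * soft_targets).sum(dim=1)

        weights = hard_aware_weights(agreement)
        return (weights * per_sample_loss).mean()

    # Toy usage: 8 samples, 10 identities.
    if __name__ == "__main__":
        logits = torch.randn(8, 10, requires_grad=True)
        labels = torch.randint(0, 10, (8,))
        loss = purify_step(logits, labels, num_classes=10)
        loss.backward()
        print(float(loss))

In this sketch, alpha would typically be annealed over training so that the model's predictions are trusted more as they improve; the exact schedule and weighting used by PurifyNet should be taken from the released code.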
