Abstract

Existing person re-identification (Re-ID) methods usually rely heavily on large-scale, thoroughly annotated training data. However, label noise is unavoidable in real scenes due to inaccurate person detection results or annotation errors. Learning a robust Re-ID model under label noise is extremely challenging because each identity has very limited annotated training samples. To avoid fitting to the noisy labels, we propose to learn a prefatory model using a large learning rate at the early stage with a self-label refining strategy, in which the labels and network are jointly optimized. To further enhance robustness, we introduce an online co-refining (CORE) framework with dynamic mutual learning, where networks and label predictions are optimized collaboratively online by distilling knowledge from peer networks. It also reduces the negative impact of noisy labels through a selective consistency strategy. CORE has two primary advantages: it is robust to different noise types and unknown noise ratios, and it can be trained easily without much additional effort on architecture design. Extensive experiments on Re-ID and image classification demonstrate that CORE outperforms its counterparts by a large margin under both practical and simulated noise settings. Notably, it also improves the state-of-the-art unsupervised Re-ID performance under standard settings. Code is available at https://github.com/mangye16/ReID-Label-Noise.
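The core label-refining idea can be illustrated with a minimal numpy sketch. Here, each sample's training target is a convex mixture of its (possibly noisy) annotated label and the peer network's prediction. The function names and the fixed mixing weight `alpha` are our own simplifications for illustration; the paper's actual method jointly optimizes labels and networks online with dynamic mutual learning.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def corefine_targets(noisy_onehot, peer_logits, alpha=0.7):
    """Refine training targets by mixing the annotated (possibly noisy)
    one-hot labels with the peer network's softmax predictions.

    A clean-label sample keeps most of its annotation (weight alpha),
    while a mislabeled sample's target is pulled toward the peer's
    belief, softening the supervision signal.
    NOTE: a hypothetical simplification, not the paper's exact update.
    """
    peer_prob = softmax(peer_logits)
    return alpha * noisy_onehot + (1.0 - alpha) * peer_prob

# Toy usage: 3 samples, 4 identities, uninformative peer (uniform logits).
onehot = np.eye(4)[[0, 1, 2]]
logits = np.zeros((3, 4))
targets = corefine_targets(onehot, logits, alpha=0.5)
# Each refined target is still a valid distribution (rows sum to 1).
```

In the full framework, two peer networks each produce such refined targets for the other, so errors memorized by one network are corrected by the other rather than reinforced.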
