Abstract

Due to the memorization effect of deep neural networks, training with noisy labels typically results in performance degradation. Existing methods have devoted considerable effort to training robust models. However, little work addresses deep metric learning in the presence of label noise, owing to the challenge of discriminating noisy examples from informative (hard) ones. To this end, we propose a simple yet effective approach, Co-mining, to build a robust and discriminative feature space. Specifically, fewer but more confident instances are selected to guarantee label quality, making up a relatively reliable gallery for exploring under-exploited patterns. Since some noisy instances are inevitably registered as clean as training progresses, we make a trade-off between cleanliness and informativeness: a triplet loss, pulling positive pairs closer and pushing negative pairs apart, is imposed on semi-hard samples rather than the hardest counterparts. Lastly, our method employs a threshold to distinguish out-of-distribution noisy images and discards them directly, and uses a re-ranking strategy for in-distribution noisy data, correcting corrupted labels based on their consistently high predicted probabilities. Comprehensive experiments on various synthetic and real-world benchmarks demonstrate the superiority of Co-mining. Code is available at https://github.com/NUST-Machine-Intelligence-Laboratory/Co-mining.
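To make the semi-hard mining step concrete, below is a minimal sketch in PyTorch. It is not the authors' implementation (see the repository above for that); the function name, the margin value, the Euclidean distance, and the choice of the hardest negative among the semi-hard candidates are illustrative assumptions:

# A minimal sketch of semi-hard triplet mining, assuming PyTorch and
# Euclidean distances; the margin value and the rule for picking the
# negative are illustrative assumptions, not the paper's exact recipe.
import torch


def semi_hard_triplet_loss(embeddings: torch.Tensor,
                           labels: torch.Tensor,
                           margin: float = 0.2) -> torch.Tensor:
    """For each anchor-positive pair, pick a semi-hard negative:
    farther from the anchor than the positive, but within the margin."""
    dist = torch.cdist(embeddings, embeddings, p=2)  # (B, B) pairwise distances

    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye     # positives: same label, different instance
    neg_mask = ~same           # negatives: different label

    losses = []
    anchors, positives = pos_mask.nonzero(as_tuple=True)
    for a, p in zip(anchors.tolist(), positives.tolist()):
        d_ap = dist[a, p]
        # Semi-hard condition: d(a, p) < d(a, n) < d(a, p) + margin.
        semi_hard = neg_mask[a] & (dist[a] > d_ap) & (dist[a] < d_ap + margin)
        if semi_hard.any():
            d_an = dist[a][semi_hard].min()  # hardest among the semi-hard
            losses.append(torch.relu(d_ap - d_an + margin))

    if not losses:  # no valid semi-hard triplet in this batch
        return embeddings.new_zeros(())
    return torch.stack(losses).mean()


# Toy usage: in Co-mining this loss would be computed only on the
# instances currently selected as (relatively) clean.
emb = torch.nn.functional.normalize(torch.randn(32, 128), dim=1)
lbl = torch.randint(0, 4, (32,))
print(semi_hard_triplet_loss(emb, lbl))

Selecting semi-hard rather than the hardest negatives reflects the cleanliness-informativeness trade-off described above: the hardest negatives are the most informative but, under label noise, are also the most likely to be mislabeled.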
