Abstract

Existing deep learning methods for fine-grained visual recognition often rely on large-scale, well-annotated training data. Obtaining fine-grained annotations in the wild typically requires concentration and expertise, e.g., fine category annotation for species recognition, instance annotation for person re-identification (re-id), and dense annotation for segmentation, which inevitably leads to label noise. This paper tackles label noise in deep model training for fine-grained visual recognition. We propose a Neighbor-Attention Label Correction (NALC) model that corrects labels during the training stage. NALC samples a training batch and a validation batch from the training set, then leverages a meta-learning framework to correct labels in the training batch based on the validation batch. To improve optimization efficiency, we introduce a novel nested optimization algorithm for the meta-learning framework. The proposed training procedure consistently improves label accuracy in the training batch, which in turn enhances the learned image representation. Experimental results demonstrate that our method significantly increases label accuracy from 70% to over 98% and outperforms recent approaches by up to 13.4% in mean Average Precision (mAP) on various fine-grained image retrieval (FGIR) tasks, including instance retrieval on CUB200 and person re-id on Market1501. We also demonstrate the efficacy of NALC on noisy semantic segmentation datasets generated from Cityscapes, where it achieves a 7.8% improvement in mIoU. NALC is also robust to different types of noise, including simulated noise such as Asymmetric, Pair-Flip, and Pattern noise, as well as practical noisy labels produced by tracklets and clustering.
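
The bilevel idea behind meta-learning label correction can be illustrated with a minimal sketch: a virtual model update on the current corrected labels, followed by a validation loss that drives the label-correction parameters. This is not the authors' NALC implementation (which uses neighbor attention and a nested optimization algorithm not detailed here); the toy linear classifier, per-sample label logits, and single inner gradient step are illustrative assumptions.

```python
# Hypothetical sketch of meta-learning label correction (not the authors' code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, C = 16, 4                                   # feature dim, number of classes

# Toy data: a noisy training batch and a validation batch, both sampled
# from the training set as in the setup described above.
x_train = torch.randn(32, D)
y_train_noisy = torch.randint(0, C, (32,))
x_val = torch.randn(32, D)
y_val = torch.randint(0, C, (32,))

W = (0.01 * torch.randn(D, C)).requires_grad_()    # toy linear classifier

# Learnable logits over corrected labels, one row per training example,
# initialized at the (possibly noisy) observed labels.
label_logits = (3.0 * F.one_hot(y_train_noisy, C).float()).requires_grad_()

inner_lr, outer_lr = 0.1, 0.5
for step in range(100):
    soft_labels = label_logits.softmax(dim=1)

    # Inner step: virtual model update on the current corrected labels,
    # keeping the graph so we can differentiate through the update.
    logits = x_train @ W
    train_loss = -(soft_labels * logits.log_softmax(dim=1)).sum(dim=1).mean()
    grad_W, = torch.autograd.grad(train_loss, W, create_graph=True)
    W_virtual = W - inner_lr * grad_W

    # Outer step: the validation loss of the virtual model is the meta
    # objective; its gradient updates the label-correction parameters.
    val_loss = F.cross_entropy(x_val @ W_virtual, y_val)
    grad_labels, = torch.autograd.grad(val_loss, label_logits)

    with torch.no_grad():
        label_logits -= outer_lr * grad_labels  # correct the labels
        W -= inner_lr * grad_W                  # real model step

corrected_labels = label_logits.argmax(dim=1)   # hard corrected labels
```

In this sketch the two gradient computations are interleaved per step; the nested optimization algorithm mentioned in the abstract presumably restructures this bilevel loop for efficiency.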
