Abstract

Learning with Noisy Labels (LNL), which aims to improve the performance of Deep Neural Networks (DNNs) when the training dataset contains incorrectly annotated labels, has been widely studied in recent years. Popular existing LNL methods rely on semantic features extracted by the DNN to detect and mitigate label noise. However, these extracted features are often spurious: their correlations with the label are unstable across different environments (domains), which can occasionally lead to incorrect predictions and compromise the efficacy of LNL methods. To address this shortcoming, we propose Invariant Feature based Label Correction (IFLC), which reduces the influence of spurious features and exploits the learned invariant features, whose correlations with the label remain stable, to correct label noise. To the best of our knowledge, this is the first attempt to mitigate the issue of spurious features in LNL methods. IFLC consists of two critical processes: the Label Disturbing (LD) process and the Representation Decorrelation (RD) process. The LD process encourages the DNN to attain stable performance across different environments, thereby reducing the captured spurious features. The RD process strengthens the independence between dimensions of the representation vector, enabling accurate use of the learned invariant features for label correction. We then apply robust linear regression to the feature representations to correct labels. We evaluated the effectiveness of our proposed method against state-of-the-art (SOTA) LNL methods on four benchmark datasets: CIFAR-10, CIFAR-100, Animal-10N, and Clothing1M. The experimental results show that our method achieves performance comparable to or better than existing SOTA methods. The source code is available at https://github.com/yangbo1973/IFLC.
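The abstract does not specify the exact robust regression setup, so the following is only a minimal sketch of the final correction step: assuming decorrelated feature representations and one-hot encodings of the noisy labels, and using scikit-learn's HuberRegressor as a stand-in robust estimator (the paper's actual method may differ), labels are corrected by taking the argmax over per-class robust fits.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

def correct_labels(features, noisy_labels, num_classes, epsilon=1.35):
    """Sketch of feature-based label correction via robust linear regression.

    features:     (n_samples, dim) decorrelated representations from the DNN
    noisy_labels: (n_samples,) integer labels, possibly corrupted
    Returns corrected integer labels (argmax over per-class robust fits).
    """
    n = features.shape[0]
    one_hot = np.eye(num_classes)[noisy_labels]      # (n, num_classes)
    scores = np.zeros((n, num_classes))
    for c in range(num_classes):
        # The Huber loss down-weights samples whose (possibly mislabeled)
        # targets disagree with the bulk of the class, providing robustness.
        reg = HuberRegressor(epsilon=epsilon).fit(features, one_hot[:, c])
        scores[:, c] = reg.predict(features)
    return scores.argmax(axis=1)                     # corrected labels

# Toy usage: 200 samples, 8-dim features, 3 classes, 20% label noise.
rng = np.random.default_rng(0)
clean = rng.integers(0, 3, size=200)
feats = rng.normal(size=(200, 8)) + clean[:, None]  # class-informative features
noisy = clean.copy()
flip = rng.random(200) < 0.2
noisy[flip] = rng.integers(0, 3, size=flip.sum())
corrected = correct_labels(feats, noisy, num_classes=3)
print("noise rate before:", (noisy != clean).mean(),
      "after:", (corrected != clean).mean())
```

Fitting one robust regressor per class means mislabeled samples act as outliers within their (incorrect) class fit, so the argmax over the per-class predictions tends to recover the majority-consistent label; this is why the decorrelation (RD) step matters, as robust linear fits are most reliable when the feature dimensions are independent.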
