Abstract

Learning with noisy labels is essential for training accurate deep neural networks, because labels collected at scale are often unreliable. Label correction methods, the current state of the art, address this problem by identifying and fixing label errors in the training set. However, existing approaches typically identify noise from model predictions alone, which is error-prone and scales poorly. In this study, a new noisy label learning framework is proposed that leverages supervised contrastive learning for enhanced representations and improved label correction. Specifically, the framework consists of a class-balanced prototype queue, a prototype-based label correction algorithm, and a supervised representation learning module. It produces closely aligned representations and model predictions for instances of the same class, and it corrects labels by aggregating the noisy labels with prototype labels. Furthermore, a theoretical analysis of the framework is provided from the perspective of the expectation–maximization (EM) algorithm. To demonstrate its efficacy, experiments were performed on synthetic datasets with various noise patterns and levels. The results show that the proposed framework achieves superior classification accuracy compared with other label correction frameworks.
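To make the label-correction step concrete, the following is a minimal NumPy sketch, assuming details the abstract does not specify: that per-class prototypes are means over a class-balanced embedding queue, that a prototype label is a softmax over cosine similarities to those prototypes, and that correction takes a convex combination of the noisy one-hot label and the prototype label. The function names, temperature `tau`, and mixing weight `alpha` are all illustrative, not the authors' exact formulation.

```python
import numpy as np

def softmax(x, tau=0.1):
    # Temperature-scaled softmax; tau is a hypothetical hyperparameter.
    z = x / tau
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def correct_label(embedding, queue, noisy_label, num_classes, alpha=0.5, tau=0.1):
    """Aggregate a noisy one-hot label with a prototype-based soft label.

    queue: dict mapping class index -> (Q, d) array of recent embeddings,
           kept class-balanced so every class contributes equally.
    """
    # Class prototypes: L2-normalized mean of each class's queued embeddings.
    protos = np.stack([queue[c].mean(axis=0) for c in range(num_classes)])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    z = embedding / np.linalg.norm(embedding)

    proto_label = softmax(protos @ z, tau)      # soft label from prototype similarity
    one_hot = np.eye(num_classes)[noisy_label]  # possibly erroneous given label
    return alpha * one_hot + (1 - alpha) * proto_label

# Toy usage: 3 classes, 4-d embeddings, queue of 8 embeddings per class.
rng = np.random.default_rng(0)
queue = {c: rng.normal(size=(8, 4)) + 3 * np.eye(3, 4)[c] for c in range(3)}
corrected = correct_label(queue[2][0], queue, noisy_label=0, num_classes=3)
print(corrected)  # probability mass shifts from the noisy class 0 toward class 2
```

Because the queue is class-balanced, rare classes contribute prototypes of the same quality as frequent ones, which is presumably why the framework uses a fixed-size per-class queue rather than a single global one.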
