Abstract
Noisy label learning makes it possible to train accurate deep neural networks on datasets whose labels are partially incorrect. Recent label correction methods are effective approaches that identify and correct label errors in the training set. However, existing noise-identification approaches generally rely on model predictions, which are error-prone and do not scale well. In this study, a new noisy label learning framework is proposed that leverages supervised contrastive learning for enhanced representation and improved label correction. Specifically, the proposed framework consists of a class-balanced prototype queue, a prototype-based label correction algorithm, and a supervised representation learning module. The framework produces closely aligned representations and model predictions for instances of the same class and performs label correction by aggregating noisy and prototype labels. Furthermore, a theoretical analysis of the framework is provided from the perspective of an expectation–maximization (EM) algorithm. To demonstrate its efficacy, experiments were performed on synthetic datasets with various noise patterns and noise levels. The experimental results show that the proposed framework achieves superior classification accuracy compared with other label correction frameworks.
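The core idea of aggregating noisy and prototype labels can be illustrated with a minimal sketch. The code below is a hypothetical illustration, not the paper's implementation: it assumes prototypes are class-wise means of L2-normalized embeddings drawn from a queue, that prototype labels come from a temperature-scaled softmax over cosine similarities, and that aggregation is a convex combination with a mixing weight `alpha`; all function names and parameters are illustrative.

```python
import numpy as np

def prototype_labels(embeddings, queue_embeds, queue_labels, num_classes, tau=0.1):
    """Soft prototype labels from nearest-prototype similarity (illustrative)."""
    # Class prototypes: mean of L2-normalized queue embeddings per class.
    # A class-balanced queue would keep an equal number of entries per class.
    protos = np.stack([
        queue_embeds[queue_labels == c].mean(axis=0)
        for c in range(num_classes)
    ])
    protos /= np.linalg.norm(protos, axis=1, keepdims=True)
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    logits = z @ protos.T / tau                  # cosine similarity / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)      # rows sum to 1

def correct_labels(noisy_labels, proto_probs, alpha=0.5):
    """Aggregate one-hot noisy labels with soft prototype labels (illustrative)."""
    one_hot = np.eye(proto_probs.shape[1])[noisy_labels]
    return alpha * one_hot + (1 - alpha) * proto_probs
```

Under this sketch, `alpha` controls how much the original (possibly noisy) label is trusted relative to the representation-based prototype label; the corrected soft labels would then supervise the next training round.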