Abstract

Deep neural networks trained with the standard cross-entropy loss tend to overfit noisy labels, which degrades classification performance. Label correction and contrastive learning have been shown, in theory, to be mutually beneficial for learning with noisy labels. In practice, however, how to fuse these two components has not been adequately studied, and the contribution of each to the performance gain remains unclear. In this paper, we propose an effective noisy-label learning framework that combines layered label correction with mixup supervised contrastive learning, built on hierarchical clustering with first-neighbor relations and a mixup contrastive loss. The framework consists of two modules: Layered Label Correction (LLC) and Mixup Supervised Contrastive Loss (MSCL). The LLC module aggregates the first neighbors' soft labels via label propagation across the multilayer partitions produced by hierarchically clustering the feature representations; its purpose is to capture the underlying structure of the learned representations at different levels of granularity. The MSCL module incorporates pseudo-labels directly into mixup contrastive learning so as to learn class-related representations from all instances of each class. Ablation studies evaluate the contribution of each module to the overall performance of the framework. Moreover, experiments on synthetic datasets with various noise patterns and levels demonstrate that the proposed framework achieves higher classification accuracy than existing alternatives.
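
To make the LLC idea concrete, the following is a minimal sketch of first-neighbor-based hierarchical clustering and one level of soft-label aggregation, assuming a FINCH-style construction (link each sample to its nearest neighbor, take connected components as clusters, then recurse on cluster means to obtain coarser partitions). All function names are hypothetical and the paper's exact procedure may differ.

```python
import numpy as np

def first_neighbor_partition(features):
    """One first-neighbor clustering step (hypothetical sketch):
    link every sample to its nearest neighbor under cosine similarity
    and take connected components of that graph as clusters."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-matches
    nn = sim.argmax(axis=1)                 # each sample's first neighbor

    # Symmetric first-neighbor adjacency; connected components via DFS.
    n = len(features)
    adj = np.zeros((n, n), dtype=bool)
    adj[np.arange(n), nn] = True
    adj |= adj.T
    labels = -np.ones(n, dtype=int)
    c = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = c
                stack.extend(np.flatnonzero(adj[j]))
        c += 1
    return labels

def propagate_soft_labels(soft_labels, partition):
    """Aggregate soft labels within each cluster of one partition level
    (one plausible reading of the label-propagation step)."""
    out = np.empty_like(soft_labels)
    for c in np.unique(partition):
        mask = partition == c
        out[mask] = soft_labels[mask].mean(axis=0)
    return out
```

Applying first_neighbor_partition recursively to per-cluster mean features would yield the multilayer partitions over which soft labels can be aggregated at progressively coarser granularities.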
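For MSCL, one natural formulation weights each mixed anchor's positive set by the mixing coefficient: a sample mixed as lam * x_i + (1 - lam) * x_p(i) treats samples sharing either source's pseudo-label as positives, with weights lam and 1 - lam. The sketch below illustrates this assumption-laden reading; it is not the paper's exact loss, and all names are placeholders.

```python
import torch

def mixup_supcon_loss(z_mix, z_clean, labels, idx_perm, lam, tau=0.1):
    """Hypothetical mixup supervised-contrastive loss.

    z_mix:    L2-normalized embeddings of mixed inputs
              x_mix = lam * x + (1 - lam) * x[idx_perm]
    z_clean:  L2-normalized embeddings of the unmixed inputs
    labels:   pseudo-labels of the unmixed inputs
    """
    logits = z_mix @ z_clean.T / tau                      # (n, n) similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # A mixed anchor i counts sample j as positive if j shares the
    # pseudo-label of either source image, weighted by lam.
    pos_a = (labels.unsqueeze(1) == labels.unsqueeze(0)).float()
    pos_b = (labels[idx_perm].unsqueeze(1) == labels.unsqueeze(0)).float()
    pos = lam * pos_a + (1.0 - lam) * pos_b

    loss = -(pos * log_prob).sum(dim=1) / pos.sum(dim=1).clamp(min=1e-8)
    return loss.mean()

# Usage sketch: lam ~ Beta(alpha, alpha), idx_perm = torch.randperm(n);
# in the full framework the pseudo-labels would come from the LLC module.
```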
