Abstract

Most existing methods for coping with noisy labels assume that the classwise data distributions are well balanced. They struggle in practical scenarios where training samples are imbalanced, since they cannot differentiate noisy samples from the clean samples of tail classes. This article makes an early effort to tackle the image classification task in which the provided labels are both noisy and long-tail distributed. To address this problem, we propose a new learning paradigm that screens out noisy samples by matching the model's inferences on weakly and strongly augmented views of each sample. A leave-noise-out regularization (LNOR) is further introduced to eliminate the effect of the recognized noisy samples. In addition, we propose a prediction penalty based on online classwise confidence levels to avoid the bias toward easy classes, which are dominated by head classes. Extensive experiments on five datasets, including CIFAR-10, CIFAR-100, MNIST, FashionMNIST, and Clothing1M, demonstrate that the proposed method outperforms existing algorithms for learning with long-tailed distributions and label noise.
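To make the screening idea concrete, the following is a minimal PyTorch sketch of one plausible weak/strong augmentation matching criterion: a sample is flagged as likely noisy when the model's predictions on its two augmented views agree with each other but contradict the given label. The function name, the exact rule, and the batch interface are illustrative assumptions, not the paper's precise formulation.

```python
import torch

@torch.no_grad()
def screen_noisy_samples(model, weak_batch, strong_batch, labels):
    """Illustrative noise-screening rule (hypothetical, not the paper's exact criterion).

    Flags a sample as likely noisy when predictions on its weakly and
    strongly augmented views match each other but disagree with the
    provided (possibly corrupted) label.
    """
    pred_weak = model(weak_batch).argmax(dim=1)    # prediction on weak view
    pred_strong = model(strong_batch).argmax(dim=1)  # prediction on strong view
    consistent = pred_weak == pred_strong          # the two views agree
    mismatch = pred_weak != labels                 # but contradict the given label
    return consistent & mismatch                   # boolean mask of suspected noisy samples
```

Under this sketch, the returned mask could feed a regularizer in the spirit of LNOR, e.g., by excluding or down-weighting the flagged samples in the classification loss.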
