Abstract

In general, deep neural networks are vulnerable to noisy labels, also known as erroneous labels. Sample selection techniques have been actively studied as a main solution to this problem. However, if the labels are dominantly corrupted by certain classes (these noisy samples are called dominant noisy labeled samples), the network also learns the dominant noisy labeled samples rapidly via content-aware optimization, which can cause memorization and reduce generalization in the deep neural network. In this study, we propose a compelling criterion that intensively penalizes dominant noisy labeled samples through class-wise penalty labels. By averaging the prediction confidences for each observed label, we obtain suitable penalty labels that take high values when the labels are largely corrupted by certain classes. Additionally, temporal ensembling and weighting are exploited to enhance the accuracy of the penalty labels. Experiments were performed on benchmark datasets (CIFAR-10, CIFAR-100, Tiny-ImageNet) and real-world datasets (ANIMAL-10N, Clothing1M) to evaluate the proposed criterion in various scenarios with different noise rates. With the proposed sample selection, the learning process of the network becomes significantly more robust to noisy labels than with existing methods across several noise types. Moreover, the proposed criterion can be easily combined with algorithms from the loss-correction and hybrid categories through a simple modification to improve learning performance.
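As a rough illustration of the class-wise penalty labels described above, the sketch below averages the network's prediction confidences per observed label and smooths the estimate across epochs with temporal ensembling (an exponential moving average). The function name, momentum value, and array shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_penalty_labels(probs, observed_labels, penalty, num_classes, momentum=0.9):
    """Estimate class-wise penalty labels (illustrative sketch, not the paper's exact method).

    probs:           (N, C) softmax outputs of the network
    observed_labels: (N,)   possibly noisy integer labels
    penalty:         (C, C) running penalty-label estimate from prior epochs
    """
    current = np.zeros((num_classes, num_classes))
    for c in range(num_classes):
        mask = observed_labels == c
        if mask.any():
            # Average prediction confidence over samples observed as class c;
            # large off-diagonal mass in row c suggests that label c is
            # dominantly corrupted by another class.
            current[c] = probs[mask].mean(axis=0)
    # Temporal ensembling: exponential moving average over epochs to
    # stabilize the penalty labels (momentum=0.9 is an assumed value).
    return momentum * penalty + (1.0 - momentum) * current

# Toy usage: 6 samples, 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=6)   # stand-in for softmax outputs
labels = np.array([0, 0, 1, 1, 2, 2])
penalty = np.zeros((3, 3))
penalty = update_penalty_labels(probs, labels, penalty, num_classes=3)
```

Under this reading, each row of the resulting matrix could then be used during sample selection to penalize samples of the corresponding observed label, treating samples whose row assigns high confidence to a different class as likely dominant noisy labeled samples.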
