Abstract

Training a deep neural network (DNN) classifier with noisy labels is challenging because the DNN can easily overfit to the noisy labels owing to its high capacity. In this paper, we present a simple but effective training paradigm called P-DIFF+, which trains DNN classifiers while substantially alleviating the adverse impact of noisy labels. The proposed probability difference distribution implicitly reflects the probability that a training sample is clean, and this probability is used to re-weight the corresponding sample during training. In addition, a Noisy Negative Learning (NNL) loss can be employed to further re-weight samples. P-DIFF+ achieves good performance even without prior knowledge of the noise rate of the training samples. Experiments on benchmark datasets demonstrate that P-DIFF+ is superior to state-of-the-art sample selection methods.
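
To make the re-weighting idea concrete, the sketch below shows one plausible way a per-sample "probability difference" could be computed and turned into weights for a weighted cross-entropy loss. This is a minimal illustration under our own assumptions, not the authors' reference implementation: the function names (prob_diff, weighted_ce), the hard 0/1 weighting, and the fixed threshold are all hypothetical simplifications, and the NNL loss component is not shown.

```python
import torch
import torch.nn.functional as F


def prob_diff(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """delta = p(labeled class) - max p(other classes), in [-1, 1].

    Large delta suggests the sample is likely clean; small or negative
    delta suggests the label may be noisy.
    """
    probs = F.softmax(logits, dim=1)
    p_label = probs.gather(1, labels.unsqueeze(1)).squeeze(1)
    probs_other = probs.clone()
    probs_other.scatter_(1, labels.unsqueeze(1), 0.0)  # zero out the labeled class
    p_other_max = probs_other.max(dim=1).values
    return p_label - p_other_max


def weighted_ce(logits: torch.Tensor, labels: torch.Tensor,
                threshold: float = 0.0) -> torch.Tensor:
    """Down-weight samples whose probability difference falls below a threshold.

    Samples treated as likely noisy get weight 0; clean-looking samples keep
    full weight. A soft weighting or a threshold derived from the observed
    probability-difference distribution could be substituted here.
    """
    delta = prob_diff(logits, labels).detach()
    weights = (delta > threshold).float()  # hard 0/1 weights for simplicity
    ce = F.cross_entropy(logits, labels, reduction="none")
    return (weights * ce).sum() / weights.sum().clamp(min=1.0)
```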
