Abstract

Deep neural networks (DNNs) have achieved remarkable performance in various fields; however, a drop in accuracy is inevitable when they are trained on data with noisy labels. In recent years, research on learning with noisy labels has focused on sample selection, which uses the loss value of each sample to divide the training data into clean and noisy subsets. However, how well a sample selection approach removes noisy labels depends on how far learning has progressed in the initial training step. Furthermore, as the number of labeled samples used to train the DNN decreases, so does the learning efficiency. We propose a learning framework that improves both the performance of the initial training step and the learning efficiency until convergence by introducing SimCLR as a pre-training method. Experimental results on the CIFAR-100 dataset with symmetric label noise injected at various noise rates demonstrate the effectiveness of the proposed method.
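To make the sample-selection idea concrete, below is a minimal PyTorch sketch of the common "small-loss" selection rule: per-sample cross-entropy losses are computed with the current network, and the fraction of samples with the smallest losses is kept as (probably) clean. The abstract does not specify the paper's exact selection rule, so the `keep_ratio` threshold, the toy linear model, and the random data here are purely illustrative assumptions; in the proposed framework, the network's encoder would first be pre-trained with SimCLR.

```python
# Hypothetical sketch of loss-based sample selection (small-loss trick).
# The selection rule, keep_ratio, model, and data are illustrative
# assumptions, not the paper's exact method.
import torch
import torch.nn as nn

def per_sample_losses(model, inputs, labels):
    """Compute each sample's cross-entropy loss without reduction."""
    model.eval()
    with torch.no_grad():
        logits = model(inputs)
        return nn.functional.cross_entropy(logits, labels, reduction="none")

def select_clean(losses, keep_ratio=0.6):
    """Keep the keep_ratio fraction of samples with the smallest
    losses, treating them as (probably) clean."""
    k = int(keep_ratio * losses.numel())
    return torch.argsort(losses)[:k]

# Toy usage with random data; in practice `model` would be a DNN whose
# encoder was pre-trained with SimCLR, and `inputs`/`labels` would come
# from the noisy CIFAR-100 training set.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 100))
inputs = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 100, (8,))

losses = per_sample_losses(model, inputs, labels)
clean_idx = select_clean(losses, keep_ratio=0.6)
print("indices kept as clean:", clean_idx.tolist())
```

Because the quality of this split depends on the losses produced early in training, a better initialization (here, SimCLR pre-training) can directly improve which samples survive the selection.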
