Abstract

Deep learning methods achieve state-of-the-art performance in a wide range of machine learning applications. In particular, convolutional neural networks (CNNs) attain top performance when a sufficiently large number of labeled training examples is available. Unfortunately, labeled data must be curated, which requires human labor and consequently makes it expensive and time-consuming. Moreover, there is no guarantee that the obtained labels are noise-free. In fact, the performance of CNNs degrades as the level of label noise in the training dataset increases. Although training with noisy labels has received limited attention in the literature, a few semi-supervised learning methods mitigate this obstacle. In this paper, we propose a new teacher/student deep semi-supervised learning (TS-DSSL) method that employs self-training on a training dataset with noisy labels. We measure the efficiency of TS-DSSL on semi-supervised visual object classification tasks on the benchmark datasets CIFAR10 and MNIST. TS-DSSL achieves impressive results even in the presence of high levels of label noise. It also sets a record on datasets with various levels of label noise, created from the previous datasets with uniform and non-uniform noise distributions.
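The teacher/student self-training loop summarized above can be sketched in miniature. This is an illustrative assumption-based sketch, not the TS-DSSL method itself: the paper uses CNNs, whereas here a toy nearest-centroid classifier stands in for both teacher and student, and all function and variable names are hypothetical.

```python
# Illustrative sketch of teacher/student self-training (NOT the TS-DSSL
# architecture; a nearest-centroid classifier stands in for the CNNs).

def fit_centroids(points, labels):
    """'Train' a model: compute one centroid per class from labeled data."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = [s + xi for s, xi in zip(sums.get(y, [0.0] * len(x)), x)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    """Assign x to the nearest class centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda y: sum((xi - ci) ** 2
                                 for xi, ci in zip(x, centroids[y])))

# A small labeled set (whose labels may be noisy) and an unlabeled pool.
labeled_x = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
labeled_y = [0, 0, 1, 1]
unlabeled_x = [[0.1, 0.0], [1.1, 0.9]]

# Teacher: fit on the labeled data, then pseudo-label the unlabeled pool.
teacher = fit_centroids(labeled_x, labeled_y)
pseudo_y = [predict(teacher, x) for x in unlabeled_x]

# Student: retrain on labeled + pseudo-labeled data combined.
student = fit_centroids(labeled_x + unlabeled_x, labeled_y + pseudo_y)
print(pseudo_y)  # -> [0, 1]
```

In the full method, the teacher's pseudo-labels would be filtered or down-weighted to limit the influence of noisy labels, and the teacher/student cycle would be repeated.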
