Abstract

The remarkable success of Deep Neural Networks (DNNs) relies on the availability of large-scale annotated datasets. However, obtaining annotated datasets of such size is time-consuming and expensive, which hinders the development of DNNs. In this paper, a novel two-stage framework is proposed for learning with noisy labels, called the Two-Stage Sample selection and Semi-supervised learning Network (TSS-Net). It combines sample selection with semi-supervised learning. The first stage separates noisy samples from clean samples using cyclic training. The second stage treats the noisy samples as unlabeled data and the clean samples as labeled data for semi-supervised learning. Unlike previous approaches, TSS-Net requires neither specially designed robust loss functions nor complex networks. The two stages are decoupled, so each stage can be replaced by a superior method to achieve better results, which improves the flexibility of the framework. Experiments are conducted on several benchmark datasets under different settings, and the results demonstrate that TSS-Net outperforms many state-of-the-art methods.
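The two-stage pipeline described above can be sketched in a few lines. The abstract does not specify how stage one distinguishes noisy from clean samples, so the small-loss selection rule below (keeping the lowest-loss fraction of samples as "clean"), the `keep_ratio` parameter, and all function names are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical sketch of a two-stage noisy-label pipeline in the spirit of
# TSS-Net. The small-loss selection rule is an assumption for illustration.

def select_clean(losses, keep_ratio=0.7):
    """Stage 1 (assumed rule): mark the lowest-loss samples as clean."""
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    k = int(len(losses) * keep_ratio)
    return set(order[:k]), set(order[k:])  # (clean indices, noisy indices)

def two_stage_split(samples, losses, keep_ratio=0.7):
    """Stage 2 setup: clean samples keep their labels (labeled set);
    noisy samples have labels discarded (unlabeled set)."""
    clean_idx, noisy_idx = select_clean(losses, keep_ratio)
    labeled = [samples[i] for i in sorted(clean_idx)]
    unlabeled = [(samples[i][0], None) for i in sorted(noisy_idx)]
    return labeled, unlabeled

# Toy usage: four (input, label) pairs with per-sample training losses.
samples = [("x0", 0), ("x1", 1), ("x2", 0), ("x3", 1)]
losses = [0.1, 2.5, 0.2, 3.0]
labeled, unlabeled = two_stage_split(samples, losses, keep_ratio=0.5)
# labeled -> [("x0", 0), ("x2", 0)]; unlabeled -> [("x1", None), ("x3", None)]
```

In a full implementation, the labeled set would train the network with a standard supervised loss while the unlabeled set would feed a semi-supervised objective (e.g., pseudo-labeling or consistency regularization); the decoupling means either stage could be swapped out independently.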
