Abstract
Learning deep neural networks from noisy labels is challenging, because high-capacity networks can fit the data even when the class labels are noisy. In this study, we propose a self-augmentation method that requires no additional parameters and handles noisily labeled data based on the small-loss criterion. To this end, we exploit small-loss samples by introducing a noise-robust probabilistic model based on a Gaussian mixture model (GMM), in which small-loss samples follow class-conditional Gaussian distributions. With this sample augmentation using the GMM-based probabilistic model, we effectively mitigate the over-parameterization problems induced by label inconsistency in small-loss samples. We further enhance the quality of the small-loss samples using our data-adaptive selection strategy. Consequently, our method prevents over-parameterization of the networks and improves their generalization performance. Experimental results demonstrate that our method outperforms state-of-the-art methods for learning with noisy labels on several benchmark datasets. The proposed method achieves a remarkable performance gain of up to 12% over the previous state-of-the-art methods on the CIFAR datasets.
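To make the pipeline described above concrete, the following is a minimal illustrative sketch, not the authors' implementation: it selects small-loss samples, fits a class-conditional Gaussian to the features of each class, and draws synthetic samples from the fitted Gaussians to augment training. The function names, the keep_ratio parameter, and the diagonal-covariance simplification are assumptions made purely for illustration.

    import numpy as np

    def select_small_loss(losses, keep_ratio=0.5):
        # Small-loss criterion: keep the indices of the lowest-loss samples.
        k = max(1, int(len(losses) * keep_ratio))
        return np.argsort(losses)[:k]

    def fit_class_gaussians(features, labels, small_idx):
        # Fit one Gaussian (mean, diagonal variance) per class on small-loss samples.
        params = {}
        small_feats, small_labels = features[small_idx], labels[small_idx]
        for c in np.unique(small_labels):
            x = small_feats[small_labels == c]
            mu = x.mean(axis=0)
            var = x.var(axis=0) + 1e-6  # variance floor for numerical stability
            params[c] = (mu, var)
        return params

    def sample_augmented(params, n_per_class, rng=None):
        # Draw synthetic feature vectors from each class-conditional Gaussian.
        rng = np.random.default_rng() if rng is None else rng
        feats, labs = [], []
        for c, (mu, var) in params.items():
            feats.append(rng.normal(mu, np.sqrt(var), size=(n_per_class, mu.shape[0])))
            labs.append(np.full(n_per_class, c))
        return np.concatenate(feats), np.concatenate(labs)

    # Usage sketch: losses, features, and labels would come from a forward pass
    # of the network over the (noisily labeled) training set.
    # idx = select_small_loss(losses, keep_ratio=0.5)
    # gaussians = fit_class_gaussians(features, labels, idx)
    # aug_x, aug_y = sample_augmented(gaussians, n_per_class=64)

Because the synthetic samples are drawn from distributions fitted only to small-loss (likely clean) samples, this kind of augmentation adds training signal without introducing new trainable parameters, consistent with the parameter-free claim in the abstract.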