Abstract

Deep learning models have recently achieved state-of-the-art performance in image classification. However, most of these results rely heavily on large-scale, accurately labeled training data, which are time-consuming, laborious, and expensive to collect. Moreover, noisy labels degrade the generalization of deep models in multi-category classification. It is therefore essential to develop methods that can efficiently and correctly train deep models under label noise for multi-category classification. This paper proposes training deep models with robust binary loss functions to address this problem. Specifically, we handle the K-class classification task with K binary classifiers, combined through multi-category large-margin classification approaches, e.g., Pairwise-Comparison (PC) or One-versus-All (OVA). We also demonstrate theoretically that our method is inherently tolerant to label noise when symmetric binary loss functions are used in multi-category classification tasks. In addition, we design a truncated categorical cross-entropy (CCE) loss that is combined with the proposed losses to improve learning ability. Finally, we evaluate our method on three benchmark datasets with different types of label noise. The experimental results clearly confirm the effectiveness of our method, which reduces the negative effect of noisy labels and improves generalization ability.
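To illustrate the decomposition described above, the sketch below shows a One-versus-All (OVA) surrogate built from a symmetric binary loss. A binary loss l is symmetric when l(z) + l(-z) is a constant for every margin z, and the sigmoid loss used here satisfies this with constant 1; this symmetry property is what underlies the noise-tolerance argument. The function names and the plain-numpy formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid_loss(z):
    # Symmetric binary loss: sigmoid_loss(z) + sigmoid_loss(-z) == 1 for all z.
    return 1.0 / (1.0 + np.exp(z))

def ova_loss(scores, label):
    # OVA decomposition of a K-class problem into K binary problems:
    # the true class acts as a positive example for its own binary
    # classifier, and every other class acts as a negative example.
    # `scores` is a length-K array of per-class margins f_k(x).
    K = scores.shape[0]
    loss = sigmoid_loss(scores[label])          # true class: positive side
    for k in range(K):
        if k != label:
            loss += sigmoid_loss(-scores[k])    # other classes: negative side
    return loss

# Example: 4-class scores, true label 2.
scores = np.array([-1.0, 0.5, 2.0, -0.3])
total = ova_loss(scores, label=2)
```

A PC-style variant would instead sum the symmetric loss over pairwise score differences f_y(x) - f_k(x) for k != y; the symmetry condition plays the same role in either formulation.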
