Abstract

Learning with noisy labels is a prevalent weakly supervised learning paradigm. The uncertain knowledge introduced by noisy labels poses significant challenges for knowledge analysis. Given the memorization effect observed in deep neural networks, training on small-loss instances is a promising way to handle noisy labels. “Co-teaching”, the state-of-the-art training method in this field, trains two deep neural networks simultaneously on low-loss instances. While this approach has demonstrated promising performance, its effectiveness relies heavily on the predictive ability of the two networks: if they fail to provide reliable predictions, the overall learning performance may be unsatisfactory. To address this problem, and inspired by three-way decision, we propose a powerful learning paradigm named “Three-teaching”, which employs a voting mechanism to incrementally guarantee prediction quality. In this approach, both neural networks make predictions on all the data, but only the instances on which their predictions agree and whose loss is small are retained and fed to the third neural network to update its parameters. The learning process proceeds by alternating the roles of the three networks. Experimental results on benchmark datasets show that “Three-teaching” surpasses numerous state-of-the-art methods.
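As a rough sketch of the selection-and-update step described above, the following PyTorch snippet illustrates one possible reading of the voting mechanism: two networks predict on a mini-batch, only the instances they agree on are kept, the agreed subset is further filtered to its small-loss fraction, and that subset is used to update the third network. The function names, the keep_ratio parameter, and the role-rotation schedule are assumptions introduced for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def select_for_third_network(net_a, net_b, x, y, keep_ratio):
    """Keep samples where net_a and net_b agree and the loss is small.

    Hypothetical helper: the agreement test (identical argmax predictions)
    and the small-loss filter (lowest per-sample cross-entropy under net_a)
    follow the abstract's description, not the paper's released code.
    """
    with torch.no_grad():
        pred_a = net_a(x).argmax(dim=1)
        pred_b = net_b(x).argmax(dim=1)

        # "Voting": retain only instances with consistent predictions.
        agree = pred_a == pred_b
        if agree.sum() == 0:
            return x[:0], y[:0]  # no agreement in this batch

        x_agree, y_agree = x[agree], y[agree]

        # Small-loss criterion on the agreed subset.
        losses = F.cross_entropy(net_a(x_agree), y_agree, reduction="none")
        num_keep = max(1, int(keep_ratio * len(losses)))
        keep_idx = torch.argsort(losses)[:num_keep]
    return x_agree[keep_idx], y_agree[keep_idx]


def three_teaching_step(nets, optimizers, x, y, keep_ratio=0.8):
    """One alternation step: nets[0] and nets[1] vote, nets[2] is updated."""
    net_a, net_b, net_c = nets
    x_sel, y_sel = select_for_third_network(net_a, net_b, x, y, keep_ratio)
    if len(x_sel) == 0:
        return None
    optimizers[2].zero_grad()
    loss = F.cross_entropy(net_c(x_sel), y_sel)
    loss.backward()
    optimizers[2].step()
    return loss.item()
```

In this sketch the caller rotates the roles between steps (for example, nets = nets[1:] + nets[:1] after each mini-batch or epoch) so that each network alternately serves as a voter and as the network being updated; the exact rotation schedule and keep_ratio schedule would follow the paper.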
