Abstract

Multi-task classification improves generalization by exploiting correlations between tasks. However, most multi-task learning methods cannot recognize and filter noisy labels in classification problems with label noise. To address this issue, this paper proposes a novel multi-task label-noise learning method based on loss correction, called MTLNL. MTLNL introduces the class-wise denoising (CWD) method for loss decomposition and centroid estimation of the loss function in multi-task learning, and eliminates the impact of label noise by using the label flipping rate. It also extends to multi-task positive-unlabeled (PU) learning, which offers better flexibility and generalization performance. Moreover, Nesterov's method is applied to accelerate the solution of the model. MTLNL is compared with other algorithms on five benchmark datasets, five image datasets, and a multi-task PU dataset to demonstrate its effectiveness.
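To make the loss-correction idea concrete, the sketch below shows the standard unbiased loss correction for class-conditional label noise (Natarajan et al.), which reweights the loss on each observed label using the flip rates. This is a minimal illustration of the general technique the abstract refers to, not MTLNL's exact formulation; the function names and the choice of logistic loss are assumptions for the example.

```python
import numpy as np

def logistic_loss(t, y):
    # Standard logistic loss on the margin y * t (y in {+1, -1}).
    return np.log1p(np.exp(-y * t))

def corrected_loss(t, y, rho_pos, rho_neg):
    """Unbiased loss correction under class-conditional label noise.

    rho_pos = P(observed -1 | true +1), rho_neg = P(observed +1 | true -1).
    In expectation over the noisy label, this corrected loss equals the
    clean-label loss. Hypothetical illustration of the loss-correction
    idea; MTLNL's actual correction may differ.
    """
    rho_y = rho_pos if y == 1 else rho_neg        # flip rate of class y
    rho_other = rho_neg if y == 1 else rho_pos    # flip rate of class -y
    num = (1 - rho_other) * logistic_loss(t, y) - rho_y * logistic_loss(t, -y)
    return num / (1 - rho_pos - rho_neg)
```

With zero flip rates the corrected loss reduces to the plain loss; with nonzero rates it subtracts the contribution that noisy labels would otherwise inject, which is the sense in which such corrections "eliminate the impact of label noise."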
