Abstract

This article proposes a semi-supervised classification algorithm based on weighted pseudo-labeled data and mutual learning. The method aims to improve the classification performance of semi-supervised learning models and to rectify incorrect pseudo labels during training. Specifically, the algorithm combines a deep convolutional neural network with an ensemble learning model. First, output smearing is employed to construct different training sets and initialize the models; the pseudo labels of unlabeled data are then inferred from the network predictions. Second, using selection and weighting strategies for pseudo-labeled data, high-confidence pseudo-labeled samples are selected and added to the real labeled training set, and the model is retrained on the weighted pseudo-labeled data. Last, a mutual learning strategy is applied to enhance prediction consistency among the classifiers. Furthermore, diversity fine-tuning and mutual learning are performed alternately to determine the optimal balance between diversity and consistency, which in turn improves the accuracy of the pseudo-label predictions. Experimental results on three benchmark datasets, namely MNIST, CIFAR10 and SVHN, demonstrate that the proposed method effectively rectifies incorrect pseudo labels and outperforms state-of-the-art semi-supervised classification methods.
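
To make the selection, weighting, and consistency ideas concrete, the sketch below shows one plausible NumPy implementation of confidence-based pseudo-label selection with per-sample weights and a symmetric KL consistency penalty between two classifiers. The 0.95 threshold, the use of the maximum softmax probability as the weight, and the symmetric KL form are illustrative assumptions, not the exact formulation from the paper.

```python
import numpy as np

def select_and_weight_pseudo_labels(probs, threshold=0.95):
    """Keep only high-confidence pseudo labels and weight them by confidence.

    probs: (n_unlabeled, n_classes) softmax outputs of an initialized network.
    Returns (indices, pseudo_labels, weights); the weights would scale each
    sample's loss term when the model is retrained on pseudo-labeled data.
    """
    confidence = probs.max(axis=1)           # highest predicted probability
    pseudo_labels = probs.argmax(axis=1)     # predicted class used as pseudo label
    selected = np.where(confidence >= threshold)[0]
    return selected, pseudo_labels[selected], confidence[selected]

def mutual_learning_penalty(p1, p2, eps=1e-12):
    """Symmetric KL divergence between two classifiers' predictions.

    Adding this term to each classifier's loss pushes their predictions on
    the same inputs toward agreement (prediction consistency).
    """
    kl_12 = np.sum(p1 * (np.log(p1 + eps) - np.log(p2 + eps)), axis=1)
    kl_21 = np.sum(p2 * (np.log(p2 + eps) - np.log(p1 + eps)), axis=1)
    return np.mean(kl_12 + kl_21)

# Toy usage: 3 unlabeled samples, 3 classes, two classifiers' softmax outputs
probs_a = np.array([[0.97, 0.02, 0.01],
                    [0.40, 0.35, 0.25],
                    [0.05, 0.93, 0.02]])
probs_b = np.array([[0.90, 0.05, 0.05],
                    [0.30, 0.45, 0.25],
                    [0.10, 0.85, 0.05]])

idx, labels, weights = select_and_weight_pseudo_labels(probs_a, threshold=0.9)
print(idx, labels, weights)                  # only the confident samples survive
print(mutual_learning_penalty(probs_a, probs_b))
```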
