Abstract

Pseudo-label-based semi-supervised learning (SSL) has recently achieved significant success in utilizing unlabeled data. This success hinges crucially on thresholded pseudo-labeling and consistency regularization over the unlabeled data. However, most existing methods do not measure or incorporate the uncertainty arising from noisy pseudo-labels or out-of-distribution unlabeled samples. As a result, the model's predictions become noisier in real-life applications that involve a substantial amount of out-of-distribution unannotated data, leading to slow convergence during training and poor generalization performance. Inspired by recent developments in SSL, we propose a novel unsupervised uncertainty-aware objective and a threshold-mediated pseudo-labeling scheme that rely on uncertainty quantification from both aleatoric and epistemic sources. By incorporating recent SSL techniques, our uncertainty-aware framework mitigates confirmation bias and the impact of noisy pseudo-labels, resulting in improved training efficiency and better generalization. Despite its simplicity and computational efficiency, our approach outperforms state-of-the-art SSL methods on challenging datasets such as CIFAR-100 and the real-world dataset Semi-iNat.
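To make the thresholded, uncertainty-aware pseudo-labeling idea concrete, the following is a minimal illustrative sketch (not the paper's exact method): unlabeled samples are kept only when the model's prediction is both confident (max softmax probability above a threshold) and low-uncertainty (here proxied by predictive entropy). The threshold values `tau` and `max_entropy` are hypothetical choices for illustration.

```python
import numpy as np

def pseudo_label_mask(probs, tau=0.95, max_entropy=0.5):
    """Select unlabeled samples whose predictions are both confident
    (max softmax probability >= tau) and low-uncertainty (predictive
    entropy <= max_entropy; an illustrative uncertainty proxy).
    Returns (pseudo_labels, boolean_mask)."""
    conf = probs.max(axis=1)                                  # max class probability
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)    # predictive entropy
    mask = (conf >= tau) & (entropy <= max_entropy)
    return probs.argmax(axis=1), mask

# Toy batch of 3 unlabeled samples over 4 classes
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],   # confident and low-entropy -> kept
    [0.40, 0.30, 0.20, 0.10],   # low confidence -> dropped
    [0.96, 0.02, 0.01, 0.01],   # confident and low-entropy -> kept
])
labels, mask = pseudo_label_mask(probs)
```

In a full SSL pipeline, the retained pseudo-labels would then supervise a consistency loss on strongly augmented views of the same samples; samples failing either gate contribute no gradient, which limits confirmation bias from noisy targets.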
