Abstract

Semi-supervised learning attempts to use a large set of unlabeled data to increase the prediction accuracy of machine learning models when the amount of labeled data is limited. In realistic settings, however, unlabeled data may worsen performance because they contain out-of-distribution (OOD) examples that differ from the labeled data. To address this issue, safe semi-supervised deep learning has recently been proposed. This study proposes a new safe semi-supervised algorithm that uses an uncertainty-aware Bayesian neural network. Our proposed method, safe uncertainty-based consistency training (SafeUC), uses Bayesian uncertainty to minimize the harmful effects caused by unlabeled OOD examples. The proposed method improves the model’s generalization performance by regularizing the network for consistency against uncertain noise. Moreover, to avoid uncertain prediction results, the proposed method includes a practical inference tip based on well-calibrated uncertainty. The effectiveness of the proposed method is demonstrated by experimental results on CIFAR-10 and SVHN, where it achieved state-of-the-art performance on all semi-supervised learning tasks across the tested OOD data presence rates.
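The core idea described above, using Bayesian uncertainty to down-weight the consistency loss for likely-OOD unlabeled examples and to reject uncertain predictions at inference, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it simulates MC-dropout uncertainty on a single linear layer with NumPy, and the function names, the entropy-based uncertainty measure, and the hard threshold are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, weights, n_passes=20, drop_p=0.5):
    """Approximate Bayesian prediction via MC dropout (illustrative):
    average the softmax output over several stochastic forward passes
    and use predictive entropy as the uncertainty score."""
    probs = []
    for _ in range(n_passes):
        # Sample a dropout mask over the weights (inverted dropout scaling)
        mask = rng.binomial(1, 1 - drop_p, size=weights.shape) / (1 - drop_p)
        logits = x @ (weights * mask)
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())
    mean = np.stack(probs).mean(axis=0)
    # Predictive entropy: high for ambiguous / likely-OOD inputs
    entropy = -(mean * np.log(mean + 1e-12)).sum()
    return mean, entropy

def weighted_consistency_loss(p_clean, p_noisy, uncertainty, threshold=1.0):
    """Consistency term between clean and perturbed predictions,
    zeroed out when the example's uncertainty exceeds a threshold
    (a hypothetical stand-in for SafeUC's uncertainty weighting)."""
    weight = float(uncertainty < threshold)
    return weight * np.square(p_clean - p_noisy).sum()
```

At inference, the same uncertainty score can serve as a reject option: predictions whose entropy exceeds a calibrated threshold are withheld rather than returned, which mirrors the "practical inference tip" mentioned in the abstract.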
