Abstract

• Training an inference-uncertainty estimator for selective classification
• Virtual generation of pseudo training data with mixup in the latent feature space
• Hypothesis of a correlation between the mixup ratio and the uncertainty
• Self-annotating the uncertainty score of generated samples using the mixup ratio
• Improvements in uncertainty estimation on selective classification benchmarks

We introduce Mixup Gamblers+, a method for building a deep learning classifier capable of self-evaluating the reliability of its inference results. In the proposed method, samples with high uncertainty are generated virtually through data interpolation in the feature space embedded by the deep learning model, and the classifier is trained to detect them. Moreover, we introduce metric learning for feature representation learning so that distances in the latent feature space correlate with the similarity of the samples. This enables us to estimate the uncertainty of the pseudo data from their distances in the latent feature space and strengthens the training of the proposed method. Because the data interpolation is performed in the latent feature space, the proposed method is a general-purpose approach for learning inference-uncertainty estimation models, independent of the dataset and problem setting. The proposed method improves inference accuracy and achieves state-of-the-art results in inferential uncertainty estimation.
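To make the described procedure concrete, below is a minimal PyTorch sketch of latent-feature mixup with uncertainty targets self-annotated from the mixup ratio. The network layout, the Beta(1, 1) sampling of the mixup ratio, the mapping u = 1 - |2λ - 1| from mixup ratio to uncertainty target, and the MSE uncertainty loss are illustrative assumptions, not the paper's exact formulation; the metric-learning component is omitted here.

```python
# Hypothetical sketch: mixup in the latent feature space with self-annotated
# uncertainty targets derived from the mixup ratio. All architectural and
# loss choices below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelectiveClassifier(nn.Module):
    def __init__(self, in_dim=784, feat_dim=128, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))
        self.class_head = nn.Linear(feat_dim, num_classes)
        self.uncertainty_head = nn.Linear(feat_dim, 1)  # reliability estimate


def latent_mixup_batch(model, x, y, num_classes):
    """Interpolate latent features of a shuffled pair of examples and
    self-annotate the uncertainty target of each pseudo sample from the
    mixup ratio (assumed mapping: u = 1 - |2*lam - 1|)."""
    z = model.encoder(x)
    perm = torch.randperm(x.size(0))
    lam = torch.distributions.Beta(1.0, 1.0).sample((x.size(0), 1))
    z_mix = lam * z + (1.0 - lam) * z[perm]                # interpolate in feature space
    y_onehot = torch.eye(num_classes)[y]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]  # soft class target
    u_target = 1.0 - (2.0 * lam - 1.0).abs()               # assumed ratio -> uncertainty map
    return z_mix, y_mix, u_target.squeeze(1)


if __name__ == "__main__":
    model = SelectiveClassifier()
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    z_mix, y_mix, u_target = latent_mixup_batch(model, x, y, num_classes=10)
    logits = model.class_head(z_mix)
    u_pred = torch.sigmoid(model.uncertainty_head(z_mix)).squeeze(1)
    # Soft-target cross-entropy (PyTorch >= 1.10) plus an assumed MSE loss
    # that trains the uncertainty head toward the self-annotated targets.
    loss = F.cross_entropy(logits, y_mix) + F.mse_loss(u_pred, u_target)
    print(f"combined loss: {loss.item():.4f}")
```

At test time, the uncertainty head's output can serve as the rejection score for selective classification: predictions whose estimated uncertainty exceeds a threshold are abstained from.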
