Abstract

In security-critical applications, it is essential to know how confident a model is in its predictions. Many uncertainty estimation methods have been proposed recently, and these methods are reliable when the training data contain no labeling errors. However, we find that the quality of their uncertainty estimates degrades dramatically when noisy labels are present in the training data. On some datasets, the uncertainty estimates become entirely unreliable, even though the labeling noise barely affects test accuracy. We further analyze how existing label noise handling methods affect the reliability of uncertainty estimates, although most of these methods focus only on improving model accuracy. We find that data cleaning-based approaches can alleviate the influence of label noise on uncertainty estimates to some extent, but they still have drawbacks. Finally, we propose an uncertainty estimation method that is robust under label noise. Compared with other algorithms, our approach produces more reliable uncertainty estimates in the presence of noisy labels, especially when there are large-scale labeling errors in the training data.

Keywords: Uncertainty estimation; Noisy label; Out-of-distribution data; Mis-classification detection
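To make the abstract's claim concrete, the sketch below shows one common way such an experiment is set up: inject symmetric label noise into the training labels and measure the expected calibration error (ECE) of the resulting model's predictions. This is an illustrative sketch only, not the paper's method; the noise rate, the number of bins, and the evaluation protocol are assumptions chosen for the example.

```python
# Illustrative sketch (not the paper's method): inject symmetric label noise
# into training labels and score uncertainty quality via expected calibration error.
import numpy as np

def inject_symmetric_noise(labels, noise_rate, num_classes, rng=None):
    """Flip each label to a different, uniformly random class with probability noise_rate."""
    rng = np.random.default_rng(rng)
    noisy = labels.copy()
    flip = rng.random(len(noisy)) < noise_rate
    # Offset by 1..num_classes-1 so a flipped label is guaranteed to change.
    offsets = rng.integers(1, num_classes, size=flip.sum())
    noisy[flip] = (noisy[flip] + offsets) % num_classes
    return noisy

def expected_calibration_error(probs, labels, num_bins=15):
    """Standard ECE: bin predictions by confidence, average |accuracy - confidence| per bin."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    correct = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Usage idea: train any classifier on inject_symmetric_noise(y_train, 0.4, 10),
# then compare expected_calibration_error(test_probs, y_test) against the
# clean-label baseline; the abstract reports that this gap can be dramatic
# even when test accuracy is barely affected.
```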
