Abstract

Out-of-distribution (OOD) detection is critical for preventing deep learning models from making incorrect predictions, especially in safety-critical applications such as medical diagnosis and autonomous driving. However, neural networks often suffer from overconfidence, producing high-confidence predictions on OOD data that was never seen during training and may be irrelevant to the training distribution; determining the reliability of a prediction therefore remains a difficult task. To address this challenge, we propose Uncertainty-Estimation with Normalized Logits (UE-NL), a new method for robust learning and OOD detection with three main benefits. (1) A network with UE-NL treats every in-distribution (ID) sample equally: it predicts an uncertainty score for each input, and this uncertainty is incorporated into the softmax function to adjust the learning strength on easy and hard samples during training, making the model learn robustly and accurately. (2) UE-NL enforces a constant vector norm on the logits, decoupling the optimization process from the growth of the output norm, which contributes to the overconfidence issue. (3) UE-NL provides a new metric for detecting OOD data: the magnitude of the uncertainty score. Experiments demonstrate that UE-NL outperforms existing methods on common OOD benchmarks and is more robust to noisy ID data that other methods may misjudge as OOD.
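The two core mechanisms described above — normalizing the logits to a constant norm and softening the softmax with a predicted per-sample uncertainty — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `norm_scale` value, and the exact way the uncertainty enters the temperature are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ue_nl_probs(logits, uncertainty, norm_scale=10.0):
    """Hypothetical UE-NL-style predictive probabilities.

    logits:      (B, C) raw class scores
    uncertainty: (B,)   predicted per-sample uncertainty score (>= 0)
    """
    # Enforce a constant vector norm on the logits, so confidence
    # cannot grow merely because the output's magnitude increases.
    normed = norm_scale * logits / np.linalg.norm(logits, axis=-1, keepdims=True)
    # Treat the uncertainty as a per-sample temperature: higher
    # uncertainty softens the softmax, weakening the training signal
    # on hard (high-uncertainty) samples.
    scaled = normed / (1.0 + uncertainty[:, None])
    return softmax(scaled)

def ood_flag(uncertainty, threshold=1.0):
    # The abstract's third point: use the magnitude of the predicted
    # uncertainty score itself as the OOD-detection metric.
    return uncertainty > threshold
```

For example, the same logits produce a sharper distribution when the predicted uncertainty is low and a flatter one when it is high, which is what damps the learning strength on hard samples.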
