Abstract

The challenges and risks of deploying deep neural networks (DNNs) in the open world are often overlooked and can result in severe outcomes. Our proposed informer approach leverages the sensitivity of autoencoder-based outlier detectors to epistemic uncertainty by ensembling multiple detectors, each learning a different one-vs-rest setting. Our results clearly show informer's superiority over DNN ensembles, kernel-based DNNs, and traditional multi-layer perceptrons (MLPs) in terms of robustness to outliers and dataset shift, while maintaining competitive classification performance. Finally, we show that informer can estimate the overall uncertainty of a prediction and, in contrast to all other baselines, decompose that estimate into aleatoric and epistemic uncertainty. This is an essential feature in many use cases, as the underlying reasons for the uncertainty are fundamentally different and can require different actions.

Keywords: Uncertainty estimation, Aleatoric uncertainty, Epistemic uncertainty, Open world recognition
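To make the idea concrete, the following is a minimal sketch of an ensemble of one-vs-rest autoencoder detectors; it is not the authors' implementation. All names (AutoEncoder, train_one_vs_rest, predict_with_uncertainty), the architecture sizes, and the specific uncertainty proxies are assumptions: each detector is trained only on samples of its own class, a softmin over per-class reconstruction errors acts as a class distribution whose entropy serves as an aleatoric proxy, and a uniformly high error across all detectors (an input no detector reconstructs well) serves as an epistemic/outlier proxy.

```python
# Illustrative sketch only: one small autoencoder per class, trained
# one-vs-rest in the outlier-detection sense (fit on in-class data, so
# out-of-class and out-of-distribution inputs reconstruct poorly).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim_in, dim_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim_in, 32), nn.ReLU(),
                                     nn.Linear(32, dim_latent))
        self.decoder = nn.Sequential(nn.Linear(dim_latent, 32), nn.ReLU(),
                                     nn.Linear(32, dim_in))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_one_vs_rest(X, y, n_classes, epochs=50):
    """Train one detector per class, each on that class's samples only."""
    detectors = []
    for c in range(n_classes):
        ae = AutoEncoder(X.shape[1])
        opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
        Xc = X[y == c]
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(ae(Xc), Xc)
            loss.backward()
            opt.step()
        detectors.append(ae)
    return detectors

@torch.no_grad()
def predict_with_uncertainty(detectors, x):
    # Per-class reconstruction error: low error => x resembles that class.
    errs = torch.stack([((ae(x) - x) ** 2).mean(dim=-1) for ae in detectors])
    probs = torch.softmax(-errs, dim=0)  # softmin over errors, per sample
    # Aleatoric proxy: entropy of the class distribution (class overlap).
    aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(dim=0)
    # Epistemic proxy: even the best-fitting detector reconstructs poorly.
    epistemic = errs.min(dim=0).values
    return probs.argmax(dim=0), aleatoric, epistemic

# Usage with hypothetical data: X is a float tensor (n, d), y int labels (n,).
X, y = torch.randn(300, 4), torch.randint(0, 3, (300,))
detectors = train_one_vs_rest(X, y, n_classes=3)
pred, aleatoric, epistemic = predict_with_uncertainty(detectors, X[:5])
```

Under these assumptions, the two proxies can be acted on separately: high epistemic scores flag outliers or dataset shift (no detector recognizes the input), while high aleatoric scores flag ambiguity between known classes.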
