Abstract

Standard deep learning methods, such as Ensemble Models, Bayesian Neural Networks, and Quantile Regression Models, provide estimates of prediction uncertainty for data-driven deep learning models. However, their applicability can be limited by heavy memory and inference costs and by their limited ability to properly capture out-of-distribution uncertainties. Additionally, some of these models require post-training calibration, which restricts their use in continuous learning applications. In this paper, we present a new approach that provides predictions with calibrated uncertainties, including out-of-distribution contributions, and compare it to standard methods on the Fermi National Accelerator Laboratory (FNAL) Booster accelerator complex.
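
For context, the sketch below illustrates one of the standard baselines named above: a deep ensemble whose member-to-member spread serves as an (uncalibrated) uncertainty estimate. This is not the paper's proposed method; the architecture, data, and hyperparameters are illustrative assumptions only.

```python
# Minimal deep-ensemble uncertainty sketch (baseline illustration, not the
# paper's approach). All sizes and training settings are placeholders.
import torch
import torch.nn as nn

def make_regressor() -> nn.Sequential:
    # Small MLP regressor; the architecture is an arbitrary assumption.
    return nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

# Synthetic 1-D regression data standing in for accelerator readings.
x = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)
y = torch.sin(3.0 * x) + 0.1 * torch.randn_like(x)

# Train several independently initialized members.
ensemble = [make_regressor() for _ in range(5)]
for model in ensemble:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()

# Predictive mean and spread: the disagreement across members is used as
# the uncertainty estimate. Note that every member must be stored and
# evaluated at inference time, which is the memory and inference cost
# referred to in the abstract.
with torch.no_grad():
    preds = torch.stack([m(x) for m in ensemble])  # (members, N, 1)
    mean, std = preds.mean(dim=0), preds.std(dim=0)
```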
