Abstract

We propose a novel framework for uncertainty quantification via information bottleneck (IB-UQ) for scientific machine learning tasks, including deep neural network (DNN) regression and neural operator learning (DeepONet). Specifically, we implement the bottleneck through a confidence-aware encoder, which encodes inputs into latent representations according to the confidence that the input lies in the region covered by the training data, and use a Gaussian decoder to predict the mean and variance of the output conditional on the representation variables. Furthermore, we propose a data-augmentation-based information bottleneck objective that improves the quality of extrapolation uncertainty quantification; both the encoder and the decoder can be trained by minimizing a tractable variational bound of this objective. Compared with uncertainty quantification (UQ) methods for scientific learning tasks that rely on Bayesian neural networks with Hamiltonian Monte Carlo posterior estimators, the proposed model is computationally efficient, particularly on large-scale datasets. The effectiveness of the IB-UQ model is demonstrated on several representative examples, including regression of discontinuous functions, learning nonlinear operators for partial differential equations, and a large-scale climate map model. The experimental results indicate that the IB-UQ model can handle noisy data, generate robust predictions, and provide reliable uncertainty estimates for out-of-distribution data.
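As an illustrative sketch only (not the authors' implementation), the encoder/Gaussian-decoder structure described above can be realized roughly as follows. The layer widths, latent dimension, and the simple Gaussian negative log-likelihood plus KL penalty used as a stand-in for the data-augmentation-based IB objective are all assumptions made to keep the example concrete and self-contained.

```python
# Minimal sketch of an encoder + Gaussian decoder in the spirit of IB-UQ.
# Layer sizes, latent dimension, and the plain NLL + KL objective are
# illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn

class ConfidenceAwareEncoder(nn.Module):
    """Maps input x to a stochastic latent z; the learned scale serves as a
    rough confidence signal (larger scale -> less confident representation)."""
    def __init__(self, in_dim: int, latent_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, latent_dim)
        self.log_sigma = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.net(x)
        mu, log_sigma = self.mu(h), self.log_sigma(h)
        z = mu + torch.exp(log_sigma) * torch.randn_like(mu)  # reparameterization
        return z, mu, log_sigma

class GaussianDecoder(nn.Module):
    """Predicts the mean and log-variance of the output given latent z."""
    def __init__(self, latent_dim: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh())
        self.mean = nn.Linear(hidden, out_dim)
        self.log_var = nn.Linear(hidden, out_dim)

    def forward(self, z):
        h = self.net(z)
        return self.mean(h), self.log_var(h)

def ib_style_loss(y, y_mean, y_log_var, z_mu, z_log_sigma, beta: float = 1e-3):
    """Gaussian NLL on the outputs plus a KL term that compresses the latent
    toward a standard normal prior (a simple stand-in for the IB penalty)."""
    nll = 0.5 * (y_log_var + (y - y_mean) ** 2 / torch.exp(y_log_var)).mean()
    kl = 0.5 * (torch.exp(2 * z_log_sigma) + z_mu ** 2
                - 2 * z_log_sigma - 1).mean()
    return nll + beta * kl

# Toy usage on a 1-D regression batch.
encoder, decoder = ConfidenceAwareEncoder(1), GaussianDecoder(16, 1)
x = torch.linspace(-1, 1, 128).unsqueeze(-1)
y = torch.sin(3 * x) + 0.05 * torch.randn_like(x)
z, z_mu, z_log_sigma = encoder(x)
y_mean, y_log_var = decoder(z)
loss = ib_style_loss(y, y_mean, y_log_var, z_mu, z_log_sigma)
loss.backward()
```

In this reading, the predicted variance captures uncertainty in the output, while the stochastic latent (regularized by the KL term) discourages confident predictions far from the training distribution; the paper's data-augmentation-based objective refines this behavior for extrapolation.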
