Abstract

Uncertainty quantification is essential for preventing inaccurate predictions by neural networks. A vanilla neural network for regression does not intrinsically provide explicit information about prediction uncertainty. To quantify the prediction uncertainty for regression problems, we can build an alternative prediction model specialized for uncertainty quantification. However, this requires access to the training data, which are unavailable in many real-world situations. To address such situations, this study presents a surrogate approach to quantifying the prediction uncertainty of a regression network without using training data. A regression network tends to have high prediction uncertainty when its output is sensitive to its input. Based on this intuition, we quantify the sensitivity and use it as a surrogate measure of the prediction uncertainty. To do so, we introduce four surrogate measures that capture the sensitivity in different ways: input perturbation, gradient norm, MC-dropout, and knowledge distillation. For a query instance, each surrogate measure can be computed using only the regression network itself, yielding an estimate of the prediction uncertainty. We demonstrate the effectiveness of each proposed surrogate measure on nine regression datasets.
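To make the sensitivity intuition concrete, below is a minimal sketch of two of the four surrogate measures (gradient norm and MC-dropout) for a generic PyTorch regression network. The architecture, layer sizes, dropout rate, and sample count are illustrative assumptions, not the paper's exact configuration; both measures use only the trained network and a query instance, with no training data.

```python
# Hedged sketch of two sensitivity-based surrogate measures for a
# regression network. All hyperparameters here are illustrative.
import torch
import torch.nn as nn


class RegressionNet(nn.Module):
    """A small MLP regressor (hypothetical architecture for illustration)."""

    def __init__(self, in_dim: int, hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def gradient_norm_surrogate(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Gradient norm: the norm of d(output)/d(input) at the query point.
    A large norm means the output is sensitive to small input changes,
    which is treated as a proxy for high prediction uncertainty."""
    x = x.clone().requires_grad_(True)
    y = model(x).sum()  # summing scalar outputs lets autograd handle a batch
    (grad,) = torch.autograd.grad(y, x)
    return grad.norm(dim=-1)  # one sensitivity score per query instance


def mc_dropout_surrogate(model: nn.Module, x: torch.Tensor,
                         n_samples: int = 30) -> torch.Tensor:
    """MC-dropout: keep dropout active at inference time and use the
    standard deviation of repeated stochastic predictions as the score."""
    model.train()  # train mode keeps dropout active during the forward passes
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    model.eval()
    return preds.std(dim=0).squeeze(-1)


# Usage on query instances (no training data needed at this point):
model = RegressionNet(in_dim=8)
x_query = torch.randn(4, 8)
print(gradient_norm_surrogate(model, x_query))
print(mc_dropout_surrogate(model, x_query))
```

The other two measures follow the same pattern: input perturbation scores a query by the spread of predictions under small random input noise, and knowledge distillation compares the network's output against that of a distilled student model.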
