Abstract

The design and testing of supervised machine learning models combine two fundamental distributions: (1) the training data distribution and (2) the testing data distribution. Although these two distributions are identical and identifiable when the data set is infinite, they are imperfectly known when the data are finite (and possibly corrupted), and this uncertainty must be taken into account for robust uncertainty quantification (UQ). An important case is when the test distribution comes from a modal or localized area of the finite sample distribution. We present a general decision-theoretic bootstrapping solution to this problem: (1) partition the available data into a training subset and a UQ subset; (2) take m subsampled subsets of the training set and train m models; (3) partition the UQ set into n sorted subsets and take a random fraction of them to define n corresponding empirical distributions μ_j; (4) consider the adversarial game where Player I selects a model i ∈ {1, …, m}, Player II selects the UQ distribution μ_j, and Player I receives a loss defined by evaluating model i against data points sampled from μ_j; (5) identify optimal mixed strategies (probability distributions over models and UQ distributions) for both players. These randomized optimal mixed strategies provide optimal model mixtures and UQ estimates under the adversarial uncertainty of the training and testing distributions represented by the game. The proposed approach provides (1) some degree of robustness to in-sample distribution localization/concentration and (2) conditional probability distributions on the output space forming aleatory representations of the uncertainty in the output as a function of the input variable.
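The five steps above can be sketched in code. The synthetic data, the least-squares "models", and the mean-squared-error loss below are illustrative assumptions not taken from the paper; only the game-theoretic structure (loss matrix plus the standard linear-programming formulation of a zero-sum game) follows the abstract.

```python
# Hedged sketch of decision-theoretic bootstrapping on toy data.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative data: y = 2x + noise (assumption, not from the paper).
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)

# Step 1: partition into a training subset and a UQ subset.
X_tr, y_tr, X_uq, y_uq = X[:120], y[:120], X[120:], y[120:]

# Step 2: train m models on random subsamples of the training set
# (here: one-dimensional least-squares slopes).
m = 5
models = []
for _ in range(m):
    idx = rng.choice(len(X_tr), size=60, replace=False)
    slope = np.linalg.lstsq(X_tr[idx], y_tr[idx], rcond=None)[0][0]
    models.append(slope)

# Step 3: partition the UQ set, sorted by input, into n subsets;
# each subset defines an empirical distribution mu_j.
n = 4
order = np.argsort(X_uq[:, 0])
uq_subsets = np.array_split(order, n)

# Step 4: loss matrix -- model i evaluated on points from mu_j.
L = np.empty((m, n))
for i, slope in enumerate(models):
    for j, idx in enumerate(uq_subsets):
        L[i, j] = np.mean((slope * X_uq[idx, 0] - y_uq[idx]) ** 2)

# Step 5: Player I's optimal mixed strategy via the standard LP for a
# zero-sum game: minimise v subject to sum_i p_i L[i, j] <= v for all j,
# with p a probability vector over the m models.
c = np.r_[np.zeros(m), 1.0]                    # objective: game value v
A_ub = np.c_[L.T, -np.ones(n)]                 # L^T p - v <= 0
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)   # sum_i p_i = 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])
p, value = res.x[:m], res.x[m]                 # model mixture, game value
```

Player II's optimal mixed strategy over the μ_j is obtained symmetrically (maximising the minimum expected loss over the transposed matrix); the resulting mixture p is the optimal model mixture referred to in the abstract.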
