Abstract

Making good predictions of a physical system with a computer code requires its inputs to be carefully specified. Some of these inputs, called control variables, reproduce physical conditions, whereas others, called parameters, are specific to the computer code and most often uncertain. Statistical calibration aims to reduce their uncertainty with the help of a statistical model that links the code outputs to the field measurements. In a Bayesian setting, the posterior distribution of these parameters is typically sampled using Markov chain Monte Carlo methods. However, these methods are impractical when the code runs are highly time-consuming. A way to circumvent this issue is to replace the computer code with a Gaussian process emulator and to sample a surrogate posterior distribution based on it. Calibration is then subject to an error that strongly depends on the numerical design of experiments used to fit the emulator. Under the assumption that there is no code discrepancy, we aim to reduce this error by constructing a sequential design by means of the expected improvement criterion. Numerical illustrations in several dimensions assess the efficiency of such sequential strategies.
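To illustrate the kind of procedure the abstract describes, the sketch below pairs a toy one-dimensional "simulator" with a minimal Gaussian process emulator and a standard expected improvement (EI) criterion. This is only an assumption-laden illustration, not the paper's actual method: the true simulator, the misfit-based objective, the squared-exponential kernel, and all names (`simulator`, `misfit`, `theta_true`) are hypothetical choices made for the example, and the paper's calibration-specific EI criterion is replaced here by the classical EI for minimization.

```python
import numpy as np
from math import erf

# Hypothetical expensive "simulator" with a single uncertain parameter theta.
def simulator(theta):
    return np.sin(3 * theta) + 0.5 * theta

# Field measurement generated at an assumed "true" parameter value.
theta_true = 0.6
y_field = simulator(theta_true)

# Squared-error misfit between code output and the field measurement;
# calibration here means finding theta with small misfit.
def misfit(theta):
    return (simulator(theta) - y_field) ** 2

# Squared-exponential kernel for the GP emulator (unit prior variance).
def kernel(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# GP posterior mean and standard deviation at test points Xs,
# given design points X with (noise-free) observations y.
def gp_predict(X, y, Xs, jitter=1e-6):
    K = kernel(X, X) + jitter * np.eye(len(X))
    Ks = kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 1e-12, None)
    return mu, np.sqrt(var)

# Classical expected improvement for minimization:
# EI(x) = (best - mu) * Phi(z) + sigma * phi(z), with z = (best - mu) / sigma.
def expected_improvement(mu, sigma, best):
    z = (best - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    phi = np.exp(-0.5 * z**2) / np.sqrt(2 * np.pi)
    return (best - mu) * Phi + sigma * phi

# Sequential design: start from a small initial design, then repeatedly
# run the simulator where EI is largest.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 4)              # initial design on the parameter space
y = misfit(X)
grid = np.linspace(0, 1, 200)         # candidate points for the next run
for _ in range(10):
    mu, sigma = gp_predict(X, y, grid)
    ei = expected_improvement(mu, sigma, y.min())
    x_next = grid[np.argmax(ei)]      # next simulator run
    X = np.append(X, x_next)
    y = np.append(y, misfit(np.array([x_next])))

theta_hat = X[np.argmin(y)]           # calibrated parameter estimate
```

In this toy setting the EI criterion concentrates the design points in the low-misfit region of the parameter space, so most of the simulator budget is spent where it matters for calibration; a space-filling (non-sequential) design of the same size would spread runs uniformly instead.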
