Abstract

One way to quantify the uncertainty in Bayesian inverse problems arising in the engineering domain is to generate samples from the posterior distribution using Markov chain Monte Carlo (MCMC) algorithms. Basic MCMC methods tend to explore the parameter space slowly, which makes them inefficient for practical problems. Enhanced MCMC approaches, such as Hamiltonian Monte Carlo (HMC), require gradients from the physical problem simulator, which are often not available. In this case, a feasible option is to use the gradient approximations provided by surrogate (proxy) models built on the simulator output. In this paper, we consider proxy-aided HMC employing a Gaussian process (kriging) emulator. We review in detail the different aspects of kriging proxies, the underlying principles of the HMC sampler, and its interaction with the proxy model. The proxy-aided HMC algorithm is thoroughly tested in different settings and applied to three case studies: one toy problem and two synthetic reservoir simulation models. We address the question of how the sampler performance is affected by the increase of the problem dimension, the use of gradients in proxy training, the use of proxy-for-the-data, and the different approaches to design point selection. It turns out that applying the proxy model with an HMC sampler may be beneficial for relatively small physical models, with around 20 unknown parameters. Such a sampler is shown to outperform both the basic Random Walk Metropolis algorithm and the HMC algorithm fed with the exact simulator gradients.

Highlights

  • Hamiltonian Monte Carlo (HMC) samplers and proxy models

  • We studied the application of proxy models to aid the Markov chain Monte Carlo sampling for the Bayesian inverse problems

  • We described the underlying principles for building a kriging proxy model, and training the proxy model based on the target function values



Introduction

One of the common ways to tackle an inverse problem is to find a single model which is calibrated (history matched) to reproduce the observation data. A more advanced approach involves quantification of uncertainties (UQ), e.g. by generating a range of models consistent both with the prior information and with the observed data or, strictly speaking, by sampling from the target posterior distribution. This is usually done by employing a Markov chain Monte Carlo (MCMC) sampler. The papers cited usually deal with problems of small dimension, hardly exceeding 10. This limitation may stem either from the need to avoid unnecessary complexity in the numerical illustrations (in this case, the algorithms should in principle be applicable to higher dimensions), or it may indicate more fundamental challenges in applying proxy models to aid MCMC (which seems to be the case, at least for the procedure considered in our work).

For a diagonal C, the objective function f(x) is a weighted sum of squares (1), which is usually minimised during history matching; hence the name objective function.
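The weighted sum-of-squares misfit for a diagonal covariance C can be sketched as follows; the linear forward operator `A`, the observed data `d_obs`, and the variances `c_diag` are illustrative assumptions for this sketch, not values from the paper:

```python
import numpy as np

def objective(x, g, d_obs, c_diag):
    """Weighted sum-of-squares misfit for a diagonal data-error covariance C.

    x      : parameter vector
    g      : forward simulator, g(x) -> predicted data
    d_obs  : observed data vector
    c_diag : diagonal of C (data-error variances)
    """
    r = g(x) - d_obs                     # residuals
    return float(np.sum(r**2 / c_diag))  # sum_i r_i^2 / sigma_i^2

# Hypothetical linear "simulator" standing in for the reservoir model.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
g = lambda x: A @ x
d_obs = np.array([1.0, 2.0])
c_diag = np.array([0.5, 0.5])

f = objective(np.array([1.0, 1.0]), g, d_obs, c_diag)  # exact fit -> 0.0
```

History matching would minimise `objective` over `x`; sampling instead treats `exp(-f(x)/2)` (times the prior) as the unnormalised posterior density.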

RWM and HMC samplers
Burn-in stage of the MCMC
Case studies
PUNQ-S3 three-phase model
Single-phase simulation for a synthetic petroleum reservoir
Findings
Discussion
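As a companion to the sampler sections listed above, a minimal HMC step (leapfrog integration followed by a Metropolis accept/reject test) might look as follows. The standard-normal target and the step-size settings are illustrative assumptions; in the proxy-aided setting of the paper, `grad_log_post` would be supplied by the kriging emulator rather than the exact simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Standard-normal target: an illustrative stand-in for -f(x)/2.
    return -0.5 * np.sum(x**2)

def grad_log_post(x):
    # Exact here; a proxy-aided sampler would use the emulator's gradient.
    return -x

def hmc_step(x, eps=0.1, n_leap=20):
    """One HMC proposal: leapfrog trajectory + Metropolis accept/reject."""
    p = rng.standard_normal(x.shape)                  # sample momentum
    x_new, p_new = x.copy(), p.copy()
    p_new = p_new + 0.5 * eps * grad_log_post(x_new)  # initial half step
    for _ in range(n_leap - 1):
        x_new = x_new + eps * p_new
        p_new = p_new + eps * grad_log_post(x_new)
    x_new = x_new + eps * p_new
    p_new = p_new + 0.5 * eps * grad_log_post(x_new)  # final half step
    # Hamiltonian = negative log-posterior + kinetic energy.
    h_old = -log_post(x) + 0.5 * np.sum(p**2)
    h_new = -log_post(x_new) + 0.5 * np.sum(p_new**2)
    if np.log(rng.uniform()) < h_old - h_new:         # Metropolis test
        return x_new, True
    return x, False

x = np.zeros(2)
samples = []
for _ in range(500):
    x, _ = hmc_step(x)
    samples.append(x)
samples = np.asarray(samples)
```

Replacing `grad_log_post` with a surrogate changes the proposal but not the accept/reject rule, so the chain still targets the density used in the Metropolis test.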

