Abstract

We propose a smooth formulation of multiple-point statistics that enables us to solve inverse problems using gradient-based optimization techniques. We introduce a differentiable function that quantifies the mismatch between the multiple-point statistics of a training image and those of a given model. We show that, by minimizing this function, any continuous image can be gradually transformed into an image that honors the multiple-point statistics of the discrete training image. The solution to an inverse problem is then found by minimizing the sum of two mismatches: the mismatch with the data and the mismatch with the multiple-point statistics. As a result, in the framework of the Bayesian approach, such a solution belongs to a high-posterior region. The methodology, while applicable to any inverse problem with a training-image-based prior, is especially beneficial for problems that require expensive forward simulations, such as history matching. We demonstrate the applicability of the method on a two-dimensional history matching problem. Starting from different initial models, we obtain an ensemble of solutions fitting the data and the prior information defined by the training image. Finally, we propose a closed-form expression for calculating the prior probabilities using the theory of multinomial distributions, which allows us to rank the history-matched models in accordance with their relative posterior probabilities.
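
To make the optimization step concrete, the sketch below shows, under our own assumptions rather than the authors' implementation, how a combined objective of the form O(m) = S_d(m) + S_p(m) (data misfit plus smooth multiple-point-statistics mismatch) could be minimized by plain gradient descent. The callables data_misfit_grad and mps_mismatch_grad are hypothetical placeholders for the gradients of the two terms.

    # A minimal sketch, not the authors' implementation: gradient descent on the
    # sum of the data misfit S_d(m) and the smooth MPS mismatch S_p(m).
    # `data_misfit_grad` and `mps_mismatch_grad` are hypothetical callables.
    import numpy as np

    def minimize_combined_objective(m0, data_misfit_grad, mps_mismatch_grad,
                                    step=1e-2, n_iter=500):
        """Plain gradient descent on O(m) = S_d(m) + S_p(m)."""
        m = m0.copy()
        for _ in range(n_iter):
            grad = data_misfit_grad(m) + mps_mismatch_grad(m)  # gradient of the sum
            m -= step * grad                                   # descent step
            np.clip(m, 0.0, 1.0, out=m)  # keep the continuous image in [0, 1]
        return m

Running such a descent from several different initial images m0 would produce the ensemble of history-matched models described above.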

Highlights

  • History matching is the task of inferring knowledge about subsurface models of oil reservoirs from production data

  • We propose a smooth formulation of the inverse problem with discrete-facies prior defined by a multiple-point statistics model

  • Aiming to minimize the number of forward simulations, we suggest an alternative approach based on a smooth formulation of multiple-point statistics

Introduction

History matching is the task of inferring knowledge about subsurface models of oil reservoirs from production data. We propose a smooth formulation of the inverse problem with a discrete-facies prior defined by a multiple-point statistics model. This allows us to use gradient-based optimization methods to search for feasible models. Our strategy for exploring the a posteriori PDF, which is especially suitable for inverse problems with expensive forward simulations (e.g. history matching), is to obtain a set of models that feature high posterior values and rank the solutions afterwards in accordance with their relative posterior probabilities. Lange et al. (2012) solve a combinatorial optimization problem, perturbing the model in a discrete manner until it explains both the data and the a priori information. This requires many forward simulations and can be prohibitive for the history matching problem. Combining the proposed measure with the data misfit allows us to search for a solution to an inverse problem with a training-image-based prior by minimizing a single differentiable objective function.
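
As a rough illustration of the ranking step, the sketch below scores a candidate model by a multinomial log-prior built from pattern counts; this is an assumed form, not necessarily the paper's exact closed-form expression, and the pattern-count dictionaries are assumed to come from a hypothetical pattern-extraction helper.

    # A hedged sketch: score a model's pattern counts against pattern probabilities
    # estimated from the training image under a multinomial model. Assumed form,
    # not the paper's exact expression.
    import numpy as np
    from scipy.special import gammaln

    def multinomial_log_prior(model_counts, ti_counts):
        """log P(model pattern counts | training-image pattern probabilities)."""
        patterns = sorted(ti_counts)                            # common pattern index
        n = np.array([model_counts.get(p, 0) for p in patterns], dtype=float)
        f = np.array([ti_counts[p] for p in patterns], dtype=float)
        f /= f.sum()                                            # counts -> probabilities
        N = n.sum()
        # log multinomial coefficient plus sum_i n_i * log f_i
        return gammaln(N + 1) - gammaln(n + 1).sum() + np.sum(n * np.log(f + 1e-300))

Given comparable data fits, models with a higher log-prior would rank higher in relative posterior probability.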

Methodology
Inverse Problems with Training Image-Defined Prior
The Smooth Formulation of Multiple-Point Statistics
Relation of the Dissimilarity Measure to Prior Probability
Generating Near-Maximum A Priori Models
Solving Inverse Problems
History Matching Example
Findings
Conclusions