Abstract

Ensemble-based stochastic gradient methods, such as the ensemble optimization method (EnOpt), the simplex gradient method (SG), and the stochastic simplex approximate gradient method (StoSAG), approximate the gradient of an objective function using an ensemble of perturbed control vectors. These methods are increasingly used in solving reservoir optimization problems because they are not only easy to parallelize and couple with any simulator, but also computationally more efficient than the conventional finite-difference method for gradient calculations. In this work, we show that EnOpt may fail to achieve sufficient improvement of the objective function when the differences between the objective function values of perturbed control variables and their ensemble mean are large. On the basis of the comparison of EnOpt and SG, we propose a hybrid gradient of EnOpt and SG to save the computational cost of SG. We also suggest practical ways to reduce the computational cost of EnOpt and StoSAG by approximating the objective function values of unperturbed control variables using the values of perturbed ones. We first demonstrate the performance of our improved ensemble schemes using a benchmark problem. Results show that the proposed gradients saved about 30–50% of the computational cost of the same optimization by using EnOpt, SG, and StoSAG. As a real application, we consider pressure management in carbon storage reservoirs, for which brine extraction wells need to be optimally placed to reduce reservoir pressure buildup while maximizing the net present value. Results show that our improved schemes reduce the computational cost significantly.

Highlights

  • Since the ensemble Kalman filter was first introduced into petroleum engineering (Lorentzen et al, 2001; Nævdal et al, 2002; Kim et al, 2018), many ensemble-based history matching methods have gained popularity because they are reduced-rank methods and are relatively easy to implement, parallelize, and couple with any numerical simulator. Chen et al (2009) first systematically applied the ensemble concept to the optimization of well control variables to maximize the net present value in oil and gas fields

  • We propose practical ways to reduce the computational cost of ensemble optimization (EnOpt), simplex gradient (SG), and stochastic simplex approximate gradient (StoSAG) by approximating the objective function values of unperturbed control variables using those obtained for the perturbed ones

  • The objective of this optimization problem is to find the optimal location of a brine extraction well that maximizes the mean of the J-function values, given in Equation (22), over the 20 geological models



Introduction

Chen et al (2009) first systematically applied the ensemble concept to the optimization of well control variables (e.g., well rates and bottom-hole pressures) to maximize the net present value in oil and gas fields. They named their scheme the ensemble optimization (EnOpt) method. Similar to the ensemble-based data assimilation methods, EnOpt can be parallelized and coupled with any simulator. Another strength of EnOpt is that it finds an optimal solution under geological uncertainty by maximizing the expectation of the objective function values over multiple models representing model uncertainties, whereas conventional optimization methods typically require solving each model separately (Chen et al, 2009; van Essen et al, 2009), and optimization under uncertainty is non-trivial (Sun et al, 2013; Zhang et al, 2016). EnOpt includes a specific way to compute the gradient, which is needed by all gradient-based optimization algorithms
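The ensemble gradient idea described above can be sketched in a few lines: perturb the control vector with Gaussian noise, evaluate the objective for each ensemble member, and correlate the objective deviations with the control perturbations. This is a minimal illustrative sketch of an EnOpt-style estimate, not the authors' exact formulation; the function name, ensemble size, and perturbation scale are assumptions chosen for illustration.

```python
import numpy as np

def enopt_gradient(J, x, n_ens=20, sigma=0.1, rng=None):
    """EnOpt-style stochastic gradient estimate (illustrative sketch).

    J     : objective function taking a control vector and returning a scalar
    x     : current control vector
    n_ens : number of perturbed ensemble members
    sigma : standard deviation of the Gaussian control perturbations
    """
    rng = np.random.default_rng(rng)
    # Ensemble of perturbed control vectors around x
    X = x + sigma * rng.standard_normal((n_ens, x.size))
    # One simulator/objective evaluation per member (parallelizable)
    Jx = np.array([J(xi) for xi in X])
    # Deviations from the ensemble means
    dX = X - X.mean(axis=0)
    dJ = Jx - Jx.mean()
    # Cross-covariance between controls and objective acts as a
    # (smoothed) gradient estimate, up to a scaling factor
    return dX.T @ dJ / (n_ens - 1)
```

For a concave objective such as J(x) = -||x - 1||², the estimate points toward the maximizer, so a simple steepest-ascent loop on top of this function improves the objective; each iteration costs n_ens objective evaluations, which is the cost the paper's schemes aim to reduce.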

