Abstract

The goal of field-development optimization is to maximize the expected value of an objective function, e.g., the net present value of a producing oil field or the amount of CO2 stored in a subsurface formation, over an ensemble of models that describes the uncertainty range. A single evaluation of the objective function requires solving a system of partial differential equations, which can be computationally costly. Hence, it is highly desirable for an optimization algorithm to reduce the number of objective-function evaluations while delivering a high convergence rate. Here, we develop a quasi-Newton method that builds on approximate evaluations of objective-function gradients and takes more effective iterative steps by using a trust-region approach rather than a line search. We implement three gradient formulations: the ensemble-optimization (EnOpt) gradient and two variants of the stochastic simplex approximate gradient (StoSAG), all computed from perturbations around the point of interest. We modify these formulations to exploit the structure of the objective function: instead of returning a single gradient, the reformulation breaks the objective function into its subcomponents and returns a set of subgradients. We can then incorporate prior, problem-specific knowledge by passing a 'weight' matrix that acts on the subgradients. Two quasi-Newton updating algorithms are implemented: Broyden–Fletcher–Goldfarb–Shanno (BFGS) and symmetric rank 1 (SR1). We first evaluate the variants of our method on challenging test functions (e.g., stochastic variants of the Rosenbrock and Chebyquad functions). We then present an application to well-control optimization for a realistic synthetic case. Our results confirm that StoSAG gradients are significantly more effective than EnOpt gradients in accelerating convergence. An important challenge with stochastic gradients is determining, a priori, an adequate number of perturbations. We find that the optimal number of perturbations depends on both the number of decision variables and the size of the uncertainty ensemble, and we provide practical guidelines for its selection. We show on the test functions that imposing prior knowledge of the problem structure can improve gradient quality and significantly accelerate convergence. In many instances, the quasi-Newton algorithms deliver superior performance compared with the steepest-descent algorithm, especially during the early iterations. Given the computational cost involved in typical applications, rapid and substantial improvement at early iterations is highly desirable for accelerated project delivery. Furthermore, our method is robust, exploits parallel processing, and can be readily applied in a generic fashion to a variety of problems where the true gradient is difficult to compute or simply not available.

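To make the gradient construction concrete, the following is a minimal, illustrative sketch of a StoSAG-style stochastic gradient estimate (Gaussian perturbations with a least-squares/simplex solve, averaged over the ensemble) combined with a standard BFGS Hessian update, written in Python with NumPy. The function names, the perturbation scheme, and the signature of the objective J(x, m) are assumptions for illustration only and do not reproduce the paper's exact formulations.

```python
import numpy as np

def stosag_gradient(J, x, ensemble, n_pert=10, sigma=0.1, rng=None):
    """Illustrative StoSAG-style gradient: least-squares (simplex) gradient
    from random perturbations, averaged over the uncertainty ensemble.
    J(x, m) stands in for one simulation run of control vector x on model m."""
    rng = np.random.default_rng(rng)
    g = np.zeros(x.size)
    for m in ensemble:
        base = J(x, m)
        dX = sigma * rng.standard_normal((n_pert, x.size))      # perturbations
        dJ = np.array([J(x + dx, m) - base for dx in dX])       # objective changes
        # Solve dX @ g_m ~= dJ in the least-squares sense (pseudo-inverse of dX)
        g_m, *_ = np.linalg.lstsq(dX, dJ, rcond=None)
        g += g_m
    return g / len(ensemble)

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B,
    given step s = x_new - x_old and gradient change y = g_new - g_old."""
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

# Toy usage (purely illustrative, not a reservoir model):
# ensemble = [1.0, 1.5, 2.0]                   # stand-ins for model realizations
# J = lambda x, m: -m * np.sum(x**2)           # toy objective replacing a simulation
# x = np.array([1.0, -2.0, 0.5])
# g = stosag_gradient(J, x, ensemble, n_pert=20, sigma=0.05, rng=0)
```

In an actual application, each call to J corresponds to one reservoir-simulation run, so the n_pert perturbations times the ensemble size evaluations per iteration can be dispatched concurrently, which is the parallelism the abstract refers to.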