Abstract

We are concerned with the efficiency of stochastic gradient estimation methods for large-scale nonlinear optimization in the presence of uncertainty. These methods aim to estimate an approximate gradient from a limited number of random input vector samples and corresponding objective function values. Ensemble methods usually employ Gaussian sampling to generate the input samples. It is known from optimal design theory that the quality of sample-based approximations is affected by the distribution of the samples. We therefore apply six different sampling strategies to the optimization of a high-dimensional analytical benchmark problem and, in a second example, to the optimization of oil reservoir management strategies with and without geological uncertainty. The effectiveness of the sampling strategies is analyzed in terms of the quality of the estimated gradient, the final objective function value, the rate of convergence, and the robustness of the gradient estimate. Based on the results, we propose an improved version of the stochastic simplex approximate gradient (StoSAG) method that uses UE(s2) sampling designs for supersaturated cases and outperforms all alternative approaches. We additionally introduce two new strategies that outperform the UE(s2) designs previously suggested in the literature.

Highlights

  • A continuous increase over recent decades in computing power, accompanied by improvements in numerical algorithms, has led to increasing use of simulation models to obtain optimal operating strategies for complex systems

  • The presence of significant uncertainty, even after years of data gathering, motivates the optimization of the expected value of the objective function, an approach that is sometimes referred to as robust optimization [35]

  • The impact of alternative distributions was considered by Sarma and Chen [31], who investigated the effect of a quasi-random sampling method (Sobol sampling [25]) that avoids clustering of samples on stochastic noise reaction (SNR) gradient estimates


Introduction

A continuous increase over recent decades in computing power, accompanied by improvements in numerical algorithms, has led to increasing use of simulation models to obtain optimal operating strategies for complex systems. The impact of alternative distributions was considered by Sarma and Chen [31], who investigated the effect of a quasi-random sampling method (Sobol sampling [25]) that avoids clustering of samples on SNR gradient estimates. They found Sobol sampling to lead to a faster rate of convergence than Gaussian sampling when applied to a deterministic reservoir optimization problem. We address the question of which sampling strategy for the supersaturated case leads to optimal performance of approximate gradient estimation methods within large-scale nonlinear optimization problems under uncertainty. The sampling strategies are applied in conjunction with the StoSAG method, first to the extended Rosenbrock optimization test function [9] and subsequently to a synthetic 3D reservoir model of realistic complexity (for both deterministic and robust cases), followed by a detailed analysis.
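To make the setup concrete, the following is a minimal sketch (not the authors' StoSAG implementation) of a simplex-style gradient estimate: perturbation samples are drawn around the current control vector, either from a Gaussian distribution or from a Sobol sequence mapped through the Gaussian inverse CDF, and the gradient is recovered by least squares from the resulting objective-value differences. The function name `simplex_gradient`, the step size `sigma`, and the quadratic test function are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc


def simplex_gradient(f, u, n_samples, sampler="gaussian", sigma=0.1, seed=None):
    """Sketch of a simplex-style approximate gradient of f at u.

    Draws n_samples perturbations du_i, evaluates dJ_i = f(u + du_i) - f(u),
    and solves the least-squares system du_i^T g = dJ_i for g. The
    pseudo-inverse also handles the supersaturated case n_samples < dim(u).
    """
    rng = np.random.default_rng(seed)
    d = u.size
    if sampler == "gaussian":
        # Standard ensemble approach: Gaussian perturbations
        U = sigma * rng.standard_normal((n_samples, d))
    elif sampler == "sobol":
        # Quasi-random alternative: scrambled Sobol points mapped to
        # N(0, sigma^2) via the inverse Gaussian CDF (avoids clustering)
        pts = qmc.Sobol(d, scramble=True, seed=rng).random(n_samples)
        U = sigma * norm.ppf(pts)
    else:
        raise ValueError(f"unknown sampler: {sampler}")
    dJ = np.array([f(u + du) - f(u) for du in U])
    g, *_ = np.linalg.lstsq(U, dJ, rcond=None)
    return g


# Usage: for f(u) = u.u the true gradient at u is 2u
f = lambda x: float(x @ x)
u = np.ones(5)
g = simplex_gradient(f, u, n_samples=64, sampler="sobol", sigma=0.01, seed=0)
```

For a smooth deterministic function both samplers recover the gradient; the differences studied in the paper show up in convergence rate and robustness once the number of samples is small relative to the dimension and the objective is noisy.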

Random sampling
Quasi-Monte Carlo sampling
Stratified sampling
Optimal supersaturated designs
Analytical toy problem
Oil reservoir case
Deterministic optimization
Robust optimization
Sensitivity of results
Conclusions