Abstract
For highly nonlinear problems, the objective function f(x) may have multiple local optima, and it is desirable to locate all of them. Analytical or adjoint-based derivatives may not be available for most real optimization problems, especially when the responses of a system are predicted by numerical simulations. The distributed Gauss-Newton (DGN) optimization method performs efficiently and robustly for history-matching problems with multiple best matches, but it is not applicable to generic optimization problems such as life-cycle production optimization or well-location optimization.

In this paper, we generalized the distribution techniques of the DGN optimization method and developed a new distributed quasi-Newton (DQN) optimization method that is applicable to generic optimization problems. It can handle generalized objective functions F(x, y(x)) = f(x) with both explicit variables x and implicit variables, i.e., simulated responses, y(x). The partial derivatives of F(x, y) with respect to both x and y are computed analytically, whereas the partial derivatives of y(x) with respect to x (the sensitivity matrix) are estimated by applying the same efficient information-sharing mechanism implemented in the DGN optimization method. An ensemble of quasi-Newton optimization tasks is distributed among multiple high-performance-computing (HPC) cluster nodes. The simulation results generated by one optimization task are shared with the others by updating a common set of training data points, which records the simulated responses of all simulation jobs. The sensitivity matrix at the current best solution of each optimization task is approximated by either the linear-interpolation (LI) method or the support-vector-regression (SVR) method, using some or all of the training data points. The gradient of the objective function is then computed analytically from its partial derivatives with respect to x and y and the estimated sensitivities of y with respect to x. The Hessian is updated using the quasi-Newton formulation, and a new search point for each distributed optimization task is generated by solving a quasi-Newton trust-region subproblem for the next iteration.

The proposed DQN method is first validated on a synthetic history-matching problem, where its performance is comparable with that of the DGN optimizer. The DQN method is then tested on different optimization problems. For all test problems, it finds multiple optima of the objective function within a reasonably small number of iterations (30 to 50). Compared with sequential model-based derivative-free optimization methods, the DQN method reduces the computational cost, in terms of the number of simulations required for convergence, by a factor of 3 to 10.
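To make the gradient and Hessian construction described above concrete, the sketch below shows one possible form of a single DQN-style iteration for one distributed task. This is not the authors' implementation: the function names, shapes, and the use of a simple least-squares fit (in place of the LI or SVR sensitivity estimators) and a Cauchy-point step (in place of a full trust-region solver) are illustrative assumptions.

```python
# Hedged sketch of one DQN-style step, assuming F(x, y) has analytic partials
# dF/dx and dF/dy, while dy/dx is estimated from shared training points.
import numpy as np

def estimate_sensitivity(X_train, Y_train, x_best):
    """Fit a local linear model y ~ const + S (x - x_best) to the shared
    training points and return S, an approximation of dy/dx at x_best.
    (Stand-in for the LI/SVR estimators used in the paper.)"""
    dX = X_train - x_best                    # (n_points, n_x)
    dY = Y_train - Y_train.mean(axis=0)      # (n_points, n_y), crude centering
    # Least-squares solve dX @ S.T ~ dY, so S has shape (n_y, n_x).
    S_T, *_ = np.linalg.lstsq(dX, dY, rcond=None)
    return S_T.T

def chain_rule_gradient(dF_dx, dF_dy, S):
    """Total gradient of f(x) = F(x, y(x)):  g = dF/dx + S^T dF/dy."""
    return dF_dx + S.T @ dF_dy

def bfgs_update(H, s, yk, eps=1e-12):
    """Quasi-Newton (BFGS) update of the Hessian approximation H from the
    step s and gradient change yk; skipped if the curvature condition fails."""
    sy = float(s @ yk)
    if sy <= eps:
        return H
    Hs = H @ s
    return H - np.outer(Hs, Hs) / float(s @ Hs) + np.outer(yk, yk) / sy

def trust_region_step(g, H, delta):
    """Cauchy-point step for min g^T p + 0.5 p^T H p subject to ||p|| <= delta,
    a simple stand-in for the quasi-Newton trust-region subproblem."""
    gHg = float(g @ H @ g)
    gnorm = np.linalg.norm(g) + 1e-16
    tau = 1.0 if gHg <= 0 else min(1.0, gnorm**3 / (delta * gHg))
    return -tau * delta * g / gnorm
```

In this reading, each distributed task would call estimate_sensitivity against the common training set, assemble the gradient by the chain rule, refresh its Hessian with the BFGS formula, and propose the next search point from the trust-region step, with the resulting simulation appended to the shared training data for all other tasks.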