Abstract

Although it is possible to apply traditional optimization algorithms to determine the Pareto front of a multi-objective optimization problem, the computational cost is extremely high when evaluating the objective function requires solving a complex reservoir simulation problem and the optimization cannot benefit from adjoint-based gradients. This paper proposes a novel workflow for solving bi-objective optimization problems using the distributed quasi-Newton (DQN) method, a well-parallelized, derivative-free optimization (DFO) method. Numerical tests confirm that the DQN method performs efficiently and robustly. The efficiency of the DQN optimizer stems from a distributed computing mechanism that effectively shares the information discovered in prior iterations. Rather than performing multiple quasi-Newton optimization tasks in isolation, simulation results are shared among distinct DQN optimization tasks, or threads. In this paper, the DQN method is applied to the optimization of a weighted average of two objectives, using different weighting factors for different optimization threads. In each iteration, the DQN optimizer generates an ensemble of search points (or simulation cases) in parallel, and the set of non-dominated points is updated accordingly. Different DQN optimization threads, which use the same set of simulation results but different weighting factors in their objective functions, converge to different optima of the weighted-average objective function. The non-dominated points found in the last iteration form a set of Pareto-optimal solutions. Both the robustness and the efficiency of the DQN optimizer originate from its reliance on a large, shared set of intermediate search points. On the one hand, this set of search points is (much) smaller than the combined sets needed if all optimizations with different weighting factors were executed separately; on the other hand, its size provides a high degree of fault tolerance. Even if some simulations fail at a given iteration, DQN’s distributed-parallel information-sharing protocol is designed and implemented such that the optimization process can still proceed to the next iteration. The proposed DQN optimization method is first validated on synthetic examples with analytical objective functions. It is then tested on well-location optimization problems that maximize oil production and minimize water production. Furthermore, the proposed method is benchmarked against a bi-objective implementation of the MADS (Mesh Adaptive Direct Search) method, and the numerical results reinforce the favorable computational attributes of DQN observed for the test problems. To the best of our knowledge, this is the first time that a well-parallelized, derivative-free DQN optimization method has been developed and tested on bi-objective optimization problems. The proposed methodology can help improve efficiency and robustness in solving complicated bi-objective optimization problems by taking advantage of model-based search optimization algorithms with an effective information-sharing mechanism.
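To illustrate the workflow described above, the Python sketch below shows weighted-sum scalarization with per-thread weighting factors, a shared archive of evaluated points, and non-dominated filtering. It is a minimal sketch, not the authors' DQN implementation: the quasi-Newton model-building step is replaced by a simple random perturbation, and the two analytical functions f1 and f2 are assumed placeholders for the reservoir-simulation objectives (e.g., negative oil production and water production, both minimized).

```python
# Minimal sketch of the bi-objective, weighted-sum workflow described in the
# abstract. NOT the authors' DQN code: the quasi-Newton search direction is
# replaced by a random perturbation, and f1/f2 are placeholder objectives.
import numpy as np

def f1(x):  # placeholder objective 1 (e.g., negative oil production, minimized)
    return float(np.sum((x - 1.0) ** 2))

def f2(x):  # placeholder objective 2 (e.g., water production, minimized)
    return float(np.sum((x + 1.0) ** 2))

def weighted_objective(v1, v2, w):
    """Scalarized objective used by one optimization thread with weight w."""
    return w * v1 + (1.0 - w) * v2

def nondominated(points):
    """Keep (f1, f2) pairs not dominated by any other pair (both minimized)."""
    keep = []
    for i, (a1, a2) in enumerate(points):
        dominated = any(
            (b1 <= a1 and b2 <= a2) and (b1 < a1 or b2 < a2)
            for j, (b1, b2) in enumerate(points) if j != i
        )
        if not dominated:
            keep.append((a1, a2))
    return keep

rng = np.random.default_rng(0)
weights = np.linspace(0.0, 1.0, 5)              # one weighting factor per thread
archive = []                                    # shared set of evaluated points
x_best = {w: rng.normal(size=2) for w in weights}

for iteration in range(20):
    # Generate an ensemble of search points in parallel (here: random
    # perturbations standing in for DQN trial points) and evaluate them once.
    trial_points = [x_best[w] + 0.3 * rng.normal(size=2) for w in weights]
    archive.extend((x, f1(x), f2(x)) for x in trial_points)

    # Every thread reuses the full shared archive, but ranks it with its own weight.
    for w in weights:
        best_x = x_best[w]
        best_val = weighted_objective(f1(best_x), f2(best_x), w)
        for x, v1, v2 in archive:
            val = weighted_objective(v1, v2, w)
            if val < best_val:
                best_x, best_val = x, val
        x_best[w] = best_x

# The non-dominated points in the shared archive approximate the Pareto front.
pareto_front = nondominated([(v1, v2) for _, v1, v2 in archive])
print(pareto_front)
```

Because every thread re-ranks the same shared archive under its own weighting factor, adding another weighting factor requires no additional simulations, which reflects the information-sharing mechanism credited above for the efficiency and fault tolerance of the DQN workflow.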
