Abstract

Waterflooding optimization in closed-loop management of oil reservoirs is widely regarded as a challenging problem owing to the complicated and unpredictable dynamics of the process. The main goal of waterflooding is to adjust the manipulated variables so that total oil production, or a defined objective function strongly correlated with financial profit, is maximized. Thanks to recent progress in computational tools and the expansion of computing facilities, non-conventional optimization methods have become feasible for achieving these goals. In this paper, the waterflooding optimization problem is defined and formulated in the framework of Reinforcement Learning (RL), a derivative-free and model-free optimization approach. This technique avoids the challenges associated with complex gradient calculations for handling the objective functions, so an explicit dynamic model of the reservoir is not required for gradient computation. By appropriately defining the learning problem and the necessary variables, the developed algorithm makes it possible to reach the desired operational targets. The fundamental learning elements, namely actions, states, and rewards, are delineated in both the discrete and continuous domains. The proposed methodology is implemented and assessed on the Egg model, a popular and well-known reservoir case study. Different configurations of active injection and production wells are considered in order to simulate Single-Input-Multi-Output (SIMO) as well as Multi-Input-Multi-Output (MIMO) optimization scenarios. The results demonstrate that the agent gradually, but successfully, learns the most appropriate sequence of actions tailored to each practical scenario.
Consequently, the manipulated variables (actions) are set optimally to satisfy the defined production objectives, which are generally dictated by the management level or even by contractual obligations. Moreover, it is shown that by properly adjusting the reward policies in the learning process, diverse forms of multi-objective optimization problems can be formulated, analyzed, and solved.
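To make the abstract's learning loop concrete, the sketch below shows tabular Q-learning over discretized injection-rate actions. This is an illustrative toy only, not the paper's implementation: the five-level action set, the coarse reservoir states, and the stand-in reward (which peaks at a state-dependent injection level, mimicking an objective that penalizes both under- and over-injection) are all assumptions introduced here.

```python
import random

# Hypothetical toy example (not from the paper): a tabular Q-learning agent
# repeatedly picks a discretized injection-rate level and learns from a
# stand-in reward signal; the real case study would replace step() with a
# reservoir simulator such as the Egg model.
random.seed(0)

ACTIONS = [0, 1, 2, 3, 4]   # discretized injection-rate levels (assumed)
N_STATES = 5                # coarse reservoir "condition" states (assumed)

def step(state, action):
    """Toy transition/reward: reward peaks when the rate matches the state."""
    reward = 1.0 - abs(action - state) / 4.0
    next_state = (state + 1) % N_STATES  # simple deterministic drift
    return next_state, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    """Epsilon-greedy tabular Q-learning over the toy environment."""
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            # Explore with probability eps, otherwise act greedily.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[s][x])
            s2, r = step(s, a)
            # Standard Q-learning temporal-difference update.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy after training: best injection level per state.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
```

In this toy setting the learned greedy policy matches the reward peak in each state, illustrating how the agent "gradually, but successfully" identifies the appropriate action per condition; multi-objective variants would be obtained by reshaping the reward, as the abstract notes.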
