Abstract

Finding an optimal steam injection strategy for a SAGD process is considered a major challenge due to the complex dynamics of the underlying physical phenomena. Recently, reinforcement learning (RL) has been presented as an alternative to conventional methods (e.g., adjoint optimization, model predictive control) and as an effective way to address this challenge. In general, RL is a model-free strategy in which an agent is trained to find the optimal policy, that is, the action at every time step that maximizes the cumulative long-term performance of a given process, solely through continuous interaction with the environment (e.g., the SAGD process). The environment is modeled as a Markov decision process (MDP), and a state must be defined to characterize it. During this interaction, at each time step the agent executes an action, receives a scalar reward (e.g., net present value) for the action taken, and observes the new state of the environment (e.g., the pressure distribution of the reservoir). This process continues for a number of simulations, or episodes, until convergence is achieved. One approach to solving the RL problem is to parametrize the policy using well-known methods, e.g., linear functions, support vector regression (SVR), or neural networks, and to maximize the performance of the process with respect to the parameters of the policy. Using a Monte Carlo algorithm, the long-term performance of the process is obtained after every episode and the parameters of the policy are updated by gradient ascent. In this work, policy gradient is used to find the steam injection policy that maximizes the cumulative net present value of a SAGD process. The environment is represented by a reservoir simulation model inspired by a northern Alberta reservoir, and the policy is parametrized with a deep neural network. Results show that the optimal steam injection schedule can be characterized by two regions: 1) an increase, or slight increase, in steam injection rates, followed by 2) a sharp decrease until the minimum value is reached. The first region appears to serve mainly for pressure maintenance through high steam injection rates, whereas in the second region the objective is to collect more reward, i.e., to achieve high daily net present values, by reducing steam injection while maintaining high oil production.
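As a rough illustration of the Monte Carlo policy-gradient (REINFORCE) loop described above, the sketch below trains a simple Gaussian injection-rate policy against a toy environment. The ToySAGDEnv dynamics, the linear policy parametrization, and all coefficients are illustrative assumptions only; they are not the reservoir simulation model or the deep neural network used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the reservoir simulator: maps (state, injection rate)
# to a next state and a daily-NPV-like reward. A real study would call a SAGD
# simulation model here instead.
class ToySAGDEnv:
    def __init__(self, horizon=24):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        self.pressure = 1.0                        # normalized reservoir pressure
        return np.array([self.pressure, 0.0])

    def step(self, rate):
        rate = float(np.clip(rate, 0.0, 1.0))      # bounded steam injection rate
        # Crude toy dynamics: injection sustains pressure, pressure sustains oil rate.
        self.pressure = 0.9 * self.pressure + 0.1 * rate
        oil = self.pressure * (1.0 - 0.3 * rate)
        reward = oil - 0.5 * rate                  # toy daily NPV: oil revenue minus steam cost
        self.t += 1
        state = np.array([self.pressure, self.t / self.horizon])
        return state, reward, self.t >= self.horizon


# Gaussian policy with a linear mean (the paper uses a deep network; a linear
# parametrization keeps this sketch short).
theta = np.zeros(2)
log_std = np.log(0.2)

def policy_sample(state):
    mean = state @ theta
    action = mean + np.exp(log_std) * rng.standard_normal()
    return action, mean

def grad_log_pi(state, action, mean):
    # Gradient of log N(action | mean, std^2) w.r.t. theta for a linear mean.
    return (action - mean) / np.exp(2 * log_std) * state


env, alpha, gamma = ToySAGDEnv(), 0.05, 0.99
for episode in range(2000):
    state, done, traj = env.reset(), False, []
    while not done:
        action, mean = policy_sample(state)
        next_state, reward, done = env.step(action)
        traj.append((state, action, mean, reward))
        state = next_state
    # Monte Carlo return from each step, then gradient-ascent (REINFORCE) update.
    G = 0.0
    for state, action, mean, reward in reversed(traj):
        G = reward + gamma * G
        theta += alpha * G * grad_log_pi(state, action, mean)
```

The key ingredients match the description in the abstract: whole episodes are simulated, a cumulative return is computed per episode, and the policy parameters are moved in the direction of the log-likelihood gradient weighted by that return.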
