Abstract

History matching aims to calibrate a numerical reservoir model so that it can predict reservoir performance. Conventionally, an engineer and a model-calibration (data-inversion) method are required to adjust various parameters/properties of the numerical model until it matches the reservoir production history. In this study, we develop deep neural networks within a reinforcement learning framework to achieve automated history matching that reduces engineers' effort and human bias, explores the parameter space automatically and intelligently, and removes the need for a large set of labeled training data. To that end, a fast-marching-based reservoir simulator is encapsulated as the environment for the proposed reinforcement learning. The deep-neural-network-based learning agent interacts with the reservoir simulator within this framework to achieve automated history matching. Reinforcement learning techniques, namely the discrete Deep Q Network (DQN) and the continuous Deep Deterministic Policy Gradient (DDPG), are used to train the learning agents. Continuous actions enable DDPG to explore more states at each iteration of a learning episode; consequently, it achieves a better history match than DQN. For simplified dual-target composite reservoir models, the best history-matching performances of the discrete and continuous learning methods, in terms of normalized root mean square error, are 0.0447 and 0.0038, respectively. Our study shows that the continuous action space of DDPG drastically outperforms DQN.
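The abstract describes wrapping a reservoir simulator as a reinforcement learning environment in which an agent's actions adjust model parameters and the reward reflects the mismatch with observed production history. The following is a minimal, hypothetical sketch of that interaction loop; the toy exponential-decline "simulator", the parameter bounds, and the random-search agent are all illustrative stand-ins (the paper itself uses a fast-marching simulator and DQN/DDPG agents), and the normalized RMSE reward mirrors the error metric quoted in the abstract.

```python
import math
import random

class ReservoirEnv:
    """Hypothetical RL environment wrapping a toy reservoir 'simulator'.

    A single permeability-like parameter in [0, 1] controls a synthetic
    production curve; the reward is the negative normalized RMSE between
    the simulated and 'observed' (true-parameter) production histories.
    """

    def __init__(self, true_param=0.7, horizon=20):
        self.true_param = true_param
        self.horizon = horizon
        self.reset()

    def _simulate(self, param):
        # Toy production response: exponential decline scaled by the parameter.
        return [param * math.exp(-0.1 * t) for t in range(10)]

    def _nrmse(self, param):
        obs = self._simulate(self.true_param)
        sim = self._simulate(param)
        mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)
        return math.sqrt(mse) / (max(obs) - min(obs))

    def reset(self):
        self.param = 0.1
        self.steps = 0
        return self.param

    def step(self, action):
        # Continuous action: a signed increment to the model parameter
        # (DDPG-style); a discrete (DQN-style) agent would instead pick
        # from a fixed set of increments.
        self.param = min(1.0, max(0.0, self.param + action))
        self.steps += 1
        reward = -self._nrmse(self.param)
        done = self.steps >= self.horizon
        return self.param, reward, done

def run_episode(env, rng):
    """Random-search 'agent' standing in for the trained DQN/DDPG learner."""
    state = env.reset()
    best_param, best_reward = state, -float("inf")
    done = False
    while not done:
        action = rng.uniform(-0.2, 0.2)
        state, reward, done = env.step(action)
        if reward > best_reward:
            best_param, best_reward = state, reward
    return best_param, -best_reward  # best parameter and its NRMSE
```

In the paper's setting, the random action draw above would be replaced by a neural-network policy updated from the simulator's feedback; the continuous increment is what lets a DDPG-style agent probe many nearby model states within one episode.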
