Abstract
In this work, we present a more consistent alternative for performing value of information (VOI) analyses on sequential decision problems in reservoir management, and we generate insights into the reservoir decision-making process. Such sequential decision problems are often modeled and solved as stochastic dynamic programs, but once the state space becomes large and complex, traditional techniques such as policy iteration and backward induction quickly become computationally demanding and intractable. To reduce this computational burden, we instead use approximate dynamic programming (ADP), a powerful solution technique that handles complex, large-scale problems and finds near-optimal solutions to otherwise intractable sequential decision problems. We compare the performance of several machine learning techniques from the ADP family in determining the optimal time to begin polymer flooding within a reservoir development plan. The ADP approach used here accounts for both the information obtained before a decision is made and the information that might be obtained to support future decisions. It significantly improves both the timing and the value of the decision, leading to a marked increase in economic performance.
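To make the ADP idea concrete, the following is a minimal, self-contained sketch (not the authors' model) of one standard ADP technique for an optimal-stopping problem of the kind described above: deciding when to act (here, a stand-in for starting polymer flooding) as an uncertain state evolves. The continuation value is approximated by regressing simulated future values on polynomial features of the state (regression-based Monte Carlo ADP); the dynamics, payoff function, and all parameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 10           # number of decision epochs
n_paths = 5000   # simulated state trajectories
gamma = 0.95     # per-period discount factor

def payoff(x):
    # Hypothetical immediate reward for acting in state x
    # (e.g., incremental recovery value minus cost, in arbitrary units).
    return np.maximum(x - 1.0, 0.0)

# Forward pass: simulate state paths with a simple multiplicative random
# walk as a stand-in for uncertain reservoir/market dynamics.
x = np.empty((T + 1, n_paths))
x[0] = 1.0
for t in range(T):
    x[t + 1] = x[t] * np.exp(0.05 * rng.standard_normal(n_paths))

# Backward pass: at each epoch, approximate the value of waiting by
# regressing the discounted downstream value on a polynomial basis of
# the current state, then act whenever the immediate payoff beats it.
value = payoff(x[T])                      # at the horizon, act or forfeit
for t in range(T - 1, -1, -1):
    feats = np.vander(x[t], 4)            # basis [x^3, x^2, x, 1]
    coef, *_ = np.linalg.lstsq(feats, gamma * value, rcond=None)
    cont = feats @ coef                   # estimated continuation value
    act_now = payoff(x[t])
    stop = act_now > cont                 # approximate stopping policy
    value = np.where(stop, act_now, gamma * value)

# Monte Carlo estimate of the value achieved by the approximate policy.
adp_value = value.mean()
print(round(float(adp_value), 3))
```

Exact backward induction would instead enumerate a discretized state grid at every epoch, which is what becomes intractable as the state space grows; the regression step above replaces that enumeration with a low-dimensional fit over simulated paths.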