The behavior of oil reservoirs, characterized by geophysical, geochemical, and geological properties, can be understood through the simulation of computational models, which involves constructing meshes of finite volume elements governed by equations derived from fundamental principles. However, running multiple simulations for activities such as optimization and uncertainty assessment incurs substantial computational costs. To overcome this challenge, proxy models have been proposed to replace reservoir simulators with adequate accuracy. This work proposes data-driven proxies for reservoir simulators based on Artificial Neural Networks (ANNs). The approach takes time series of well controls as inputs and generates responses for Bottom Hole Pressures (BHPs) and/or flow rates.

In recent years, proxy models based on neural networks have been applied to predict flows and/or pressures in reservoirs. For example, Recurrent Neural Networks (RNNs), specialized in handling sequential data, were used by [1] to predict water flows in the Xiluodu hydroelectric reservoir in China. Convolutional Neural Networks (CNNs), specialized in pattern recognition in images and videos, were employed by [2] to predict pressures and flow rates of injector and producer wells, respectively.

In this study, distinct neural network architectures were evaluated to predict the outputs of a synthetic two-phase model with partial faults. Given the adoption of mixed controls, in which producer wells are controlled by BHP and injector wells by flow rate, the use of Multihead Neural Networks [3] was also investigated. This approach allows differentiated processing of the input data, contributing to more robust and efficient learning.

For each architecture considered, the impact of the number of timesteps per sample and of the number of samples on prediction accuracy was analyzed. The results indicate that parallel hybrid architectures exhibit the best performance, forming a complementary network in which each architecture contributes a different learning approach. In addition, a higher number of samples helps reduce the dispersion of the results, whereas increasing the number of timesteps does not significantly reduce the mean error.

We gratefully acknowledge the support provided by PETROBRAS, ANP, FINEP, PRH, EMBRAPII, FACEPE, CNPq, and CAPES, which has been instrumental in the successful execution of this work.
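To make the multihead and parallel-hybrid ideas concrete, the following is a minimal sketch (not the authors' implementation) of a proxy that maps well-control time series to well responses using TensorFlow/Keras. The well counts, number of timesteps, layer sizes, and the choice of an LSTM branch for producer controls and a 1D-CNN branch for injector controls are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N_STEPS = 50          # timesteps per sample (assumed)
N_PROD, N_INJ = 4, 2  # producer / injector well counts (assumed)

# Head 1: producer controls (BHP schedules), processed by a recurrent branch.
prod_in = layers.Input(shape=(N_STEPS, N_PROD), name="producer_bhp_controls")
rnn_branch = layers.LSTM(64, return_sequences=True)(prod_in)

# Head 2: injector controls (rate schedules), processed by a convolutional branch.
inj_in = layers.Input(shape=(N_STEPS, N_INJ), name="injector_rate_controls")
cnn_branch = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(inj_in)

# Parallel hybrid: merge the branches so each contributes a different view of the data.
merged = layers.Concatenate()([rnn_branch, cnn_branch])
hidden = layers.TimeDistributed(layers.Dense(64, activation="relu"))(merged)

# Outputs: producer flow rates and injector BHPs at every timestep.
prod_rates = layers.TimeDistributed(layers.Dense(N_PROD), name="producer_rates")(hidden)
inj_bhp = layers.TimeDistributed(layers.Dense(N_INJ), name="injector_bhp")(hidden)

model = Model(inputs=[prod_in, inj_in], outputs=[prod_rates, inj_bhp])
model.compile(optimizer="adam", loss="mse")

# Training would use control/response pairs exported from the reservoir simulator;
# random placeholder arrays are used here only to show the expected tensor shapes.
x_prod = np.random.rand(8, N_STEPS, N_PROD)
x_inj = np.random.rand(8, N_STEPS, N_INJ)
y_rates = np.random.rand(8, N_STEPS, N_PROD)
y_bhp = np.random.rand(8, N_STEPS, N_INJ)
model.fit([x_prod, x_inj], [y_rates, y_bhp], epochs=2, verbose=0)
```

Separating the producer and injector controls into distinct heads lets each branch learn a representation suited to its control type before the merged layers combine them, which is the kind of differentiated processing described above.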