Petroleum production forecasting involves anticipating fluid production from wells on the basis of historical data. Compared with traditional empirical, statistical, or reservoir-simulation-based models, machine learning techniques leverage the inherent relationships among historical dynamic data to predict future production. These methods are characterized by readily available parameters, fast computation, high precision, and time-cost advantages, making them widely applicable in oilfield production. In this study, time-series forecasting models built on robust and efficient machine learning techniques are formulated for production prediction. We fuse a two-stage data preprocessing method and an attention mechanism into a temporal convolutional network-gated recurrent unit (TCN-GRU) model. First, the random forest (RF) algorithm is employed to extract the key dynamic production features that influence output, reducing data dimensionality and mitigating overfitting. Next, a mode decomposition algorithm, complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), is introduced. It employs a decomposition-reconstruction approach to segment the production data into high-frequency noise components, low-frequency regular components, and trend components. These segments are then predicted individually, enabling the model to capture more accurate intrinsic relationships within the data. Finally, the TCN-GRU-MA model, which integrates a multi-head attention (MA) mechanism, is used for production forecasting. In this model, the TCN module captures temporal data features, while the attention mechanism assigns varying weights to highlight the most critical influencing factors. The experimental results indicate that the proposed model achieves outstanding predictive performance.
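The RF-based feature-selection stage described above can be sketched as follows. This is a minimal illustration using scikit-learn's `RandomForestRegressor`; the feature names and the synthetic data are assumptions for demonstration only, not the authors' dataset or exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical dynamic production features (names are illustrative assumptions)
features = ["choke_size", "tubing_pressure", "water_cut", "gas_oil_ratio"]
X = rng.normal(size=(n, len(features)))
# Synthetic production signal: dominated by the first two features plus noise
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)

# Fit the forest and rank features by impurity-based importance;
# low-ranked features would be dropped to reduce dimensionality
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranked = sorted(zip(features, rf.feature_importances_),
                key=lambda t: t[1], reverse=True)
for name, importance in ranked:
    print(f"{name:16s} {importance:.3f}")
```

In this synthetic setup the two informative features receive most of the importance mass, which is the signal one would use to discard weakly related inputs before the decomposition and forecasting stages.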
Compared to the best-performing baseline model, it achieves reductions of 3% in RMSE, 1.6% in MAE, and 12.7% in MAPE, together with a 2.6% increase in R², in Case 1. Similarly, in Case 2, it shows a 7.7% decrease in RMSE, a 7.7% decrease in MAE, an 11.6% decrease in MAPE, and a 4.7% improvement in R².
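For reference, the four evaluation metrics reported above are sketched below using their standard definitions in NumPy; the paper's exact formulas (e.g. whether MAPE is expressed as a percentage) are not given in the abstract, so this is an assumption.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Standard RMSE, MAE, MAPE (%), and R² for a forecast."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true)) * 100.0  # assumes y_true != 0
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return {"RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": r2}

# Toy production series vs. forecast (values are illustrative)
m = regression_metrics([100, 110, 120, 130], [102, 108, 121, 128])
# m["MAE"] == 1.75
```

Lower RMSE/MAE/MAPE and higher R² indicate a better fit, which is the sense in which the percentage improvements above are reported.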