Geologic carbon storage (GCS) is a practical solution to mitigate the impact of climate change and achieve net-zero carbon emissions. However, due to complex subsurface characteristics and operational constraints, GCS can potentially induce seismic events and leakage into groundwater resources. Moreover, predicting the subsurface response in pressure and saturation changes during GCS requires high-fidelity models with long computational times. Machine learning (ML) offers a promising way to alleviate these challenges. Yet, ML-driven approaches in subsurface physics face significant obstacles, such as the substantial computational resources required for training large-scale models and the accuracy demanded of a reliable surrogate model. To tackle these issues, we propose an improved neural operator (INO), a modified DeepONet architecture that handles a realistic reservoir model with both computational efficiency and high accuracy. Our proposed INO framework has three key benefits: precision, adaptability, and cost efficiency. For precision, it achieves an average root mean square error (RMSE) of 0.05% (1.5 psi) relative to the average reservoir pressure (3,150 psi) and an RMSE of 0.016 in void fraction for CO2 saturation across the testing cases. Although the model remains prone to higher errors in regions with steep gradients in the state variables, proper domain decomposition, ensemble methods, and/or the integration of domain-specific knowledge could mitigate this limitation. For adaptability, the INO framework can be trained on a subset of the computational domain during each backpropagation step, improving training flexibility even for problems with high-dimensional inputs (i.e., heterogeneous fields). For example, it can be trained with random subsampling of only 0.2% of the full domain yet predict pressure and saturation at any point in space and time within the entire domain.
For cost efficiency, training on a subset of the whole computational domain significantly reduces computational time. For example, for 90 training cases with 1.73 million cells and 50 time steps, the overall training time was about 2.5 h on a single GPU. A trained INO model takes about 1 s per evaluation over 1.73 million cells and 50 time steps, much faster than a high-fidelity simulator. Overall, the proposed INO approach with these improved benefits will enable large-scale GCS applications, enhancing the potential of ML models in subsurface and other multiphysics problems.
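The subsampling idea described above can be illustrated with a short sketch. This is not the authors' code; the cell/time-step counts are taken from the abstract, and the sampling routine (here with-replacement sampling via NumPy, which is inexpensive at this sparsity) is an assumption about one reasonable way to draw the 0.2% of space-time query points used per backpropagation step.

```python
import numpy as np

# Illustrative numbers from the abstract: 1.73M cells x 50 time steps,
# with 0.2% of the flattened space-time domain sampled per training step.
n_cells, n_steps = 1_730_000, 50
n_points = n_cells * n_steps   # total space-time query points
frac = 0.002                   # 0.2% subsample per backpropagation step

def sample_batch(rng, n_points, frac):
    """Draw a random subset of space-time indices for one training step.

    With-replacement sampling is used for speed; at 0.2% density the
    chance of duplicates is negligible for a domain this large.
    """
    k = int(n_points * frac)
    return rng.integers(0, n_points, size=k)

rng = np.random.default_rng(0)
batch = sample_batch(rng, n_points, frac)
print(batch.size)  # 173000 query points, vs. 86.5M for the full domain
```

The gradient at each step would then be computed only at these sampled coordinates, which is what keeps the per-step cost independent of the full grid size.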