Abstract

Emissions of carbon dioxide (CO2) are a major source of atmospheric pollution contributing to global warming. Carbon geological sequestration (CGS) in saline aquifers offers a feasible means of reducing the atmospheric buildup of CO2. Direct determination of the trapping efficiency of CO2 in potential storage formations requires extensive, time-consuming simulations. Machine-learning (ML) models offer a complementary means of determining trapping indexes, thereby reducing the number of simulations required. To date, however, ML models have struggled to accurately predict two specific reservoir CO2 indexes: the residual-trapping index (RTI) and the solubility-trapping index (STI). Hybrid ML models coupled with optimizers (HML models) achieve better RTI and STI prediction performance by selecting the ML model’s hyperparameters more precisely. This study develops and evaluates six HML models, combining a least-squares support-vector machine (LSSVM) and a radial-basis-function neural network (RBFNN) with three effective optimization algorithms: the genetic algorithm (GA), the cuckoo optimization algorithm (COA), and particle-swarm optimization (PSO). A total of 6810 geological-formation simulation records for RTI and STI were compiled from published studies and evaluated with the six HML models. Error and score analyses reveal that the HML models outperform standalone ML models in predicting RTI and STI for this dataset, with the LSSVM-COA model achieving the lowest root-mean-squared errors of 0.00421 and 0.00067 for RTI and STI, respectively. Sensitivity analysis identifies residual gas saturation and permeability as the most influential input variables for STI and RTI predictions. The high RTI and STI prediction accuracy achieved by the HML models promises to substantially reduce the uncertainties associated with CGS projects.
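The hybridization described above, a metaheuristic optimizer searching the hyperparameter space of a kernel model against a validation error, can be illustrated with a minimal sketch. This is not the paper's implementation: it uses synthetic data (the 6810-record dataset is not reproduced here), a closed-form kernel ridge regressor as a stand-in for the LSSVM, and a bare-bones PSO over the kernel width and regularization strength; all names and parameter ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: one input feature, smooth nonlinear target
# (the study's RTI/STI records are not public in this excerpt).
X_train = rng.uniform(0.0, 1.0, (80, 1))
y_train = np.sin(4 * X_train[:, 0]) + 0.05 * rng.normal(size=80)
X_val = rng.uniform(0.0, 1.0, (40, 1))
y_val = np.sin(4 * X_val[:, 0])

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def model_rmse(gamma, lam):
    """Fit a kernel ridge model (closed-form LSSVM analogue) on the
    training set and return its validation RMSE."""
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)
    pred = rbf_kernel(X_val, X_train, gamma) @ alpha
    return float(np.sqrt(np.mean((pred - y_val) ** 2)))

# Minimal PSO over (log10 gamma, log10 lambda).
n_particles, n_iters = 12, 30
lo, hi = np.array([-2.0, -8.0]), np.array([3.0, 1.0])  # search bounds
pos = rng.uniform([-1.0, -6.0], [2.0, 0.0], (n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([model_rmse(10 ** p[0], 10 ** p[1]) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 2))
    # Inertia + cognitive + social terms (standard PSO update).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([model_rmse(10 ** p[0], 10 ** p[1]) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"best validation RMSE found by PSO: {pbest_f.min():.4f}")
```

The same loop structure applies when the fitness call wraps an LSSVM or RBFNN fit, and swapping the PSO update for GA or COA moves only the search step, not the model evaluation.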
