Abstract

When nonlinear constraints, such as field liquid or water production rates and injection pressures as functions of time, must be honored in addition to linear ones, the life-cycle production optimization problem, a component of closed-loop reservoir management, becomes challenging and computationally expensive to solve with a high-fidelity reservoir simulator using existing gradient-based methods, whether adjoint or stochastic approximate-gradient. The objective of this study is therefore to present computationally efficient methods for deterministic production optimization under nonlinear constraints using a kernel-based machine learning method, where the cost function is the net present value (NPV). We use least-squares support-vector regression (LSSVR) to approximate the NPV function. To achieve computational efficiency, we generate output values of the NPV and the nonlinear constraint functions, here the field liquid production rate (FLPR) and field water production rate (FWPR), by running the high-fidelity simulator for a broad set of input design variables (well controls). The resulting input/output data are then used to train LSSVR proxy models that replace the high-fidelity simulator when computing the NPV and the nonlinear state-constraint functions during iterations of sequential quadratic programming (SQP). To obtain improved (higher) estimates of the optimal NPV, we use the existing iterative sampling refinement (ISR) method to update the LSSVR proxy so that it remains predictive toward promising regions of the search space during optimization. Direct and indirect ways of constructing LSSVR-based NPVs, as well as different combinations of input data, including the nonlinear state constraints and/or the bottomhole pressures (BHPs) and water injection rates, are tested as alternative feature spaces.
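As a minimal illustration of the proxy-fitting step described above, the sketch below solves the standard LSSVR dual linear system with an RBF kernel. It is a generic textbook formulation, not the authors' implementation; the kernel width `sigma` and regularization `gamma` are hypothetical choices, and in the paper's workflow the rows of `X` would be well-control vectors and `y` the simulator-computed NPV or constraint values.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma=100.0, sigma=1.0):
    # Solve the LSSVR dual system
    #   [ 0   1^T      ] [b]     [0]
    #   [ 1   K + I/g  ] [alpha] = [y]
    # for the bias b and dual weights alpha.
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]  # alpha, b

def lssvr_predict(Xtr, alpha, b, Xnew, sigma=1.0):
    # f(x) = sum_i alpha_i k(x_i, x) + b
    return rbf_kernel(Xnew, Xtr, sigma) @ alpha + b

# Toy usage: fit a smooth 1-D function in place of an NPV response surface
X = np.linspace(0.0, 2.0 * np.pi, 20).reshape(-1, 1)
y = np.sin(X).ravel()
alpha, b = lssvr_fit(X, y)
pred = lssvr_predict(X, alpha, b, X)
```

Because LSSVR replaces the SVR inequality constraints with equality constraints and a squared loss, training reduces to this single dense linear solve, which is what makes retraining cheap enough to repeat inside an ISR loop.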
The results obtained from our proposed LSSVR-based optimization methods are compared with those from our in-house stochastic simplex approximate gradient (StoSAG)-based line-search SQP (LS-SQP-StoSAG) algorithm, which uses a high-fidelity simulator directly to compute StoSAG gradients of the objective function and the nonlinear state functions, on the Brugge reservoir model. The results show that nonlinearly constrained optimization with LSSVR-ISR and SQP is 3.25-fold more computationally efficient than LS-SQP-StoSAG. In addition, for a waterflooding problem, constructing the NPV indirectly from the field liquid and water rates, with inputs supplied by LSSVR proxies of the nonlinear state constraints, requires significantly fewer training samples than constructing the NPV directly from NPVs computed by the high-fidelity simulator. LSSVR offers computational efficiency, the main goal of this research, and robustness against overfitting, especially when data are limited. In contrast, deep neural networks (DNNs) and random forests require larger training sets, much more computational resources, and longer training times; random forests and gradient boosting machines can also be prone to overfitting and become computationally intensive.
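To make the proxy-based SQP step concrete, the sketch below casts NPV maximization under a nonlinear rate constraint as an SLSQP problem using SciPy. This is not the authors' in-house LS-SQP-StoSAG code: `npv_proxy` and `flpr_proxy` are hypothetical quadratic and linear stand-ins for trained LSSVR models, and `FLPR_MAX` is an assumed rate cap, chosen only to show how the proxies plug into the optimizer.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-ins for LSSVR proxies: npv_proxy maps a vector of
# well controls u to a predicted NPV; flpr_proxy to a field liquid
# production rate that must stay below a cap.
def npv_proxy(u):
    return 10.0 - np.sum((u - 2.0) ** 2)  # peak NPV at u = (2, 2)

def flpr_proxy(u):
    return np.sum(u)  # surrogate "rate" grows with the controls

FLPR_MAX = 3.0  # assumed nonlinear state-constraint cap

res = minimize(
    fun=lambda u: -npv_proxy(u),  # maximize NPV = minimize -NPV
    x0=np.zeros(2),
    method="SLSQP",               # sequential quadratic programming
    bounds=[(0.0, 5.0)] * 2,      # linear bound constraints on controls
    constraints=[{"type": "ineq", # SLSQP requires fun(u) >= 0
                  "fun": lambda u: FLPR_MAX - flpr_proxy(u)}],
)
```

Every call the optimizer makes here hits a cheap proxy rather than a reservoir simulation, which is the source of the reported speedup; in the full workflow, ISR periodically re-runs the simulator near the current iterate and refits the proxies.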
