Abstract
Finding multiple posterior realizations through a reservoir history-matching procedure is important for uncertainty quantification, risk analysis, and decision making in closed-loop reservoir management. An efficient distributed optimization algorithm, global linear regression with distributed Gauss-Newton (GLR-DGN), has previously been proposed in the literature to minimize multiple objective functions iteratively by performing Gauss-Newton (GN) optimizations concurrently while dynamically sharing information between dispersed regions in a reduced parameter space. In theory, however, the number of initial training models must exceed the number of parameters to guarantee a unique solution of the GLR equation for sensitivity-matrix estimation. This limitation makes large-scale reservoir history-matching problems with many parameters almost intractable.

We enrich the previous history-matching framework by integrating our recently proposed smooth local parameterization (SLP) with DGN for the sensitivity-matrix calculation. A specific flow response depends mainly on a few influential, local parameters, which can generally be identified from the physical positions of the wells (e.g., the parameters in the zone surrounding each well); this is particularly true for large-scale reservoir models. Motivated by this locality, the paper presents a new integration of subdomain linear regression (SLR) with DGN, referred to as SLR-DGN. SLP represents the global spatial parameter field independently within low-order parameter subspaces in each subdomain, so that, on the basis of the SLP procedure, only a few training models are required to compute local sensitivities of the response functions using subdomain linear regression. SLP is a smooth, differentiable linear transformation, which makes it particularly compatible with Newton-like gradient-based optimization algorithms. We also introduce an adaptive scheme, weighting smooth local parameterization (WSLP), in which the minimization algorithm adaptively determines the weighting coefficients, and with them the optimal domain decomposition (DD), to mitigate the negative effects of an inappropriate DD strategy.

We support the framework with numerical experiments on a four-variable toy model and a modified version of the SAIGUP (sensitivity analysis of the impact of geological uncertainties on production) model with spatially dependent parameters. Comparisons with the previous GLR-DGN show that the new framework generates comparable, and in some cases better, results at significantly reduced computational cost. SLP is highly scalable because the number of training models depends primarily on the number of local parameters in each subdomain, not on the dimension of the underlying full-order model; activating more subdomains leaves fewer local parameters in each subdomain and therefore requires fewer training models. For a large-scale case study in this work, SLR-DGN needs only 100 initial-model simulations to optimize 412 global parameters. Compared with GLR-DGN, where the parameters are defined over the entire domain, the central-processing-unit (CPU) cost is reduced by several orders of magnitude while retaining reasonable accuracy.
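To make the parameterization idea concrete, the sketch below (ours, not the paper's code) builds a 1D analogue of a smooth local parameterization: the global field is a smoothly weighted blend of low-order expansions defined per subdomain, so the map from local coefficients to the field is linear, smooth, and differentiable. The Gaussian weighting functions, random orthonormal local bases, the `slp_field` helper, and all sizes are illustrative assumptions; the paper's actual construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cell, n_sub, n_loc = 200, 4, 5          # illustrative sizes (assumptions)

x = np.linspace(0.0, 1.0, n_cell)
centers = np.linspace(0.0, 1.0, n_sub)

# Smooth weights normalized to sum to one over the subdomains, so the
# blended field is differentiable everywhere (a partition of unity).
W = np.exp(-((x[:, None] - centers[None, :]) / 0.2) ** 2)
W /= W.sum(axis=1, keepdims=True)

# Hypothetical low-order local bases (in practice these might come from,
# e.g., local PCA of prior realizations); random orthonormal columns
# stand in for them here.
bases = [np.linalg.qr(rng.standard_normal((n_cell, n_loc)))[0]
         for _ in range(n_sub)]

def slp_field(coeffs):
    """Linear map from n_sub * n_loc local coefficients to the global field."""
    return sum(W[:, k] * (bases[k] @ coeffs[k]) for k in range(n_sub))

field = slp_field(rng.standard_normal((n_sub, n_loc)))
```

Because the map is linear in the coefficients, its Jacobian is constant, which is what makes such a parameterization convenient inside Newton-like optimizers.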
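The training-model counting argument can also be illustrated with a simplified linear stand-in for the simulator. In the sketch below (again ours, with illustrative sizes matching the abstract's 412 parameters and 100 training models), each response is assumed to depend only on the parameters of one subdomain; global regression is then underdetermined while per-subdomain regression is overdetermined.

```python
import numpy as np

rng = np.random.default_rng(0)

n_param, n_train, n_sub = 412, 100, 8     # assumed sizes
subdomains = np.array_split(np.arange(n_param), n_sub)

# Locality assumption behind SLR: each "well response" depends only on
# the parameters of the subdomain containing the well.
n_obs = n_sub                              # one response per subdomain
S_true = np.zeros((n_param, n_obs))
for j, idx in enumerate(subdomains):
    S_true[idx, j] = rng.standard_normal(idx.size)

X = rng.standard_normal((n_train, n_param))  # training parameter samples
Y = X @ S_true                               # stand-in for simulated responses

# Global linear regression (GLR): 100 training models for 412 parameters
# is underdetermined; the minimum-norm lstsq solution generally differs
# from the true sensitivities.
S_glr, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Subdomain linear regression (SLR): each response is regressed only on
# its ~52 local parameters, so 100 training models are more than enough;
# in this noise-free setting the local block is recovered exactly.
S_slr = np.zeros_like(S_true)
for j, idx in enumerate(subdomains):
    coef, *_ = np.linalg.lstsq(X[:, idx], Y[:, [j]], rcond=None)
    S_slr[idx, j] = coef.ravel()

print("GLR error:", np.linalg.norm(S_glr - S_true))   # large
print("SLR error:", np.linalg.norm(S_slr - S_true))   # near zero
```

In the paper's setting the responses come from reservoir simulations rather than a linear map, but the counting argument is the same: each per-response regression becomes overdetermined as soon as the number of training models exceeds the number of local parameters in a subdomain.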
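Finally, once a sensitivity matrix has been estimated, each realization takes damped GN steps toward its own objective. A minimal sketch of one such step follows; the damping term, the sizes, and the shape convention for S (parameters by observations) are our assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n_param, n_obs = 40, 10                    # illustrative sizes

# Assumed inputs: current model m, observed and simulated data, and an
# estimated sensitivity S (n_param x n_obs), e.g., assembled from SLR.
S = rng.standard_normal((n_param, n_obs))
m = rng.standard_normal(n_param)
d_obs = rng.standard_normal(n_obs)
d_sim = rng.standard_normal(n_obs)

def gauss_newton_step(m, residual, S, damping=1e-6):
    """One damped GN step for the objective 0.5 * ||g(m) - d_obs||^2."""
    H = S @ S.T + damping * np.eye(S.shape[0])  # approximate Hessian J^T J
    grad = S @ residual                          # gradient J^T r
    return m - np.linalg.solve(H, grad)

m_next = gauss_newton_step(m, d_sim - d_obs, S)
```

In the distributed setting described by the abstract, many such optimizations run concurrently, one per posterior realization, while all of them contribute their simulation results to a shared pool of training data used to refresh the regression-based sensitivities.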