Abstract

This article, written by Special Publications Editor Adam Wilson, contains highlights of paper SPE 182617, “Field-Scale Assisted History Matching Using a Systematic, Massively Parallel Ensemble-Kalman-Smoother Procedure,” by Binghuai Lin, Paul I. Crumpton, SPE, and Ali H. Dogru, SPE, Saudi Aramco, prepared for the 2017 SPE Reservoir Simulation Conference, Montgomery, Texas, USA, 20–22 February. The paper has not been peer reviewed.

This work presents a systematic and rigorous approach to reservoir decomposition combined with the ensemble Kalman smoother to overcome the complexity and computational burden associated with history matching field-scale reservoirs in the Middle East. The paper provides the formulation of the iterative regularizing ensemble Kalman smoother, introduces the use of streamline maps to facilitate domain decomposition, and presents a discussion of covariance localization. Computational-efficiency problems are addressed by three levels of parallelization.

Introduction

History matching, in which uncertain parameters are chosen so that the reservoir model reproduces the historical field performance, plays a key role in field development. Several techniques have been developed over the past decades to address the history-matching problem. It is widely acknowledged that a single deterministic reservoir model is not sufficient to represent a reservoir’s complex characteristics along with its uncertainty. The underlying reason is that history matching is an ill-posed inverse problem with nonunique solutions that can match the historical data.

To overcome the nonuniqueness problem in the history-matching process, the ensemble Kalman filter (EnKF) has been introduced to the petroleum industry with many successful applications. The EnKF can be characterized as a Monte Carlo version of the classic Kalman filter in the sense that it uses an ensemble of samples to represent the necessary statistics, such as the covariance of the model parameters and the correlations between model parameters and observations. An important feature of the EnKF method is that it sequentially assimilates observations as they become available to update the realizations in the ensemble, which include the uncertain model parameters and the primary model state variables. Hence, the EnKF is suitable for real-time data assimilation, updating the ensemble continuously when new data arrive. The joint update of the model parameters and state variables, however, can result in physically implausible dynamic states.
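To make the ensemble-based update concrete, the following is a minimal sketch of the stochastic ensemble analysis step that underlies both the EnKF and the ES: the cross-covariance between parameters and predicted data and the data covariance are estimated from the ensemble itself, and each realization is shifted toward perturbed observations. This is an illustrative Python/NumPy sketch under generic assumptions, not the authors' implementation; the names `ensemble_update`, `forward`, and `C_d` are hypothetical.

```python
import numpy as np

def ensemble_update(X, d_obs, C_d, forward, rng=None):
    """One ensemble analysis step (stochastic EnKF/ES form).

    X       : (n_param, n_ens) ensemble of model parameters (or joint state)
    d_obs   : (n_obs,) observed data
    C_d     : (n_obs, n_obs) measurement-error covariance
    forward : callable mapping one realization to predicted data (n_obs,)
    """
    rng = np.random.default_rng() if rng is None else rng
    n_ens = X.shape[1]

    # Predicted data for every realization in the ensemble.
    D = np.column_stack([forward(X[:, j]) for j in range(n_ens)])

    # Ensemble anomalies (deviations from the ensemble means).
    A = X - X.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)

    # Sample cross-covariance (parameters vs. data) and data covariance.
    C_xd = A @ Y.T / (n_ens - 1)
    C_dd = Y @ Y.T / (n_ens - 1)

    # Kalman gain built entirely from ensemble statistics.
    K = C_xd @ np.linalg.inv(C_dd + C_d)

    # Perturb the observations so the updated ensemble keeps its spread.
    D_obs = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), C_d, size=n_ens).T

    # Shift every realization toward the (perturbed) observations.
    return X + K @ (D_obs - D)
```

In the EnKF, a step of this kind is applied sequentially, with the parameter ensemble augmented by the dynamic state variables, each time new observations arrive; in the ES, it is applied once to the model parameters only, with all observations assembled into a single data vector.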
Alternatively, the ensemble-smoother (ES) method updates only the model parameters, conditioning on all observations simultaneously, and thus avoids inconsistent dynamic-state updates. Comparisons of the EnKF and ES methods have revealed that the EnKF normally outperforms the ES. This is because the ES depends purely on the prior ensemble and the available data; for highly nonlinear dynamic systems, a single update is not sufficient to achieve satisfactory performance. Also, by assimilating all observations at once, the ES is prone to overshooting and divergence.

An iterative ES was developed on the basis of the Levenberg-Marquardt method, which regularizes the update direction and chooses the step length. This method normally requires a significant number of iterations to converge and thus becomes computationally prohibitive for large-scale models. An approach was later proposed to improve the performance of the ES by assimilating the same data sets multiple times. In this iterative ES procedure, the measurement-error covariance matrix is inflated to obtain suitable updates for each iteration.
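The multiple-data-assimilation idea can be sketched as follows, reusing the hypothetical `ensemble_update` helper above. Choosing inflation coefficients whose reciprocals sum to one follows the commonly cited ES-MDA construction in the literature; this is an assumption for illustration, not a detail taken from the paper.

```python
import numpy as np

def iterative_es_mda(X0, d_obs, C_d, forward,
                     alphas=(4.0, 4.0, 4.0, 4.0), rng=None):
    """Assimilate the same data several times with an inflated error covariance.

    Each pass calls the ensemble analysis step but replaces C_d with
    alpha_i * C_d; requiring sum(1 / alpha_i) == 1 keeps the repeated
    updates consistent with assimilating the data once (an illustrative
    assumption, following the ES-MDA literature).
    """
    assert abs(sum(1.0 / a for a in alphas) - 1.0) < 1e-8
    rng = np.random.default_rng() if rng is None else rng

    X = X0.copy()
    for alpha in alphas:
        # Inflating the measurement-error covariance down-weights the data
        # in each individual iteration, giving smaller, damped updates.
        X = ensemble_update(X, d_obs, alpha * C_d, forward, rng=rng)
    return X
```

Because each iteration applies only a damped correction, such a scheme reduces the overshooting that a single large ES update can cause, at the cost of running the forward simulations once per assimilation pass.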
