Summary

This paper evaluates reservoir performance forecasting. Actual field examples are discussed, comparing past forecasts with observed performances. The apparently weak correlation between advances in technology and forecasting accuracy is assessed. Parallel planning is presented as an approach that can significantly accelerate reservoir forecasts. The recognition of inevitable forecasting uncertainties constitutes the philosophical basis of parallel planning.

Introduction

To say that reservoir performance forecasting is not an exact science would be an understatement. Even with all the significant advances occurring across a wide spectrum of related areas, questions still remain regarding the reliability of reservoir predictions. In fact, our efforts today are aimed as much at defining the limits of uncertainty envelopes as at producing forecasts. The discussion here pursues the following questions: (1) What are realistic accuracy expectations in performance forecasts? and (2) Are our conventional thought processes in modeling inherently ill-structured to produce rapid forecasts? By their very nature, EOR processes introduce additional levels of complexity to forecasting. This discussion relates mainly to conventional reservoir systems.

Forecasting Methods and Uncertainty

Methods and Limits. Current reservoir performance forecasting methods can be classified into two broad categories: empirical and mathematical. This paper focuses on finite-difference methods because they represent the predominant industry-wide vehicle in reservoir evaluations. Empirical methods, such as decline curves, are useful, yet they have a limited application domain: continuation of past production practices and mechanisms is a precondition for forecast reliability (a minimal decline-curve sketch appears later in this section). Likewise, hybrid methods, while suitable for a wide class of problems (e.g., miscible, pattern floods), have not yet fully matured to offer a universal forecasting capability.

Lorenz recognized the stochastic nature, and hence the inherent limitations, of weather forecasting. Lorenz's celebrated "butterfly effect" example points to inevitable limits of predictability. The analogies between weather and reservoirs have been noted; specifically, the sensitivity of reservoir performance, and hence of forecasts, to certain geologic parameters (i.e., flow boundary conditions) has been highlighted. This sensitivity suggests that performance forecasts will remain uncertain indefinitely.

Both internal and external reservoir factors contribute to forecast uncertainties (Fig. 1). When model forecasts diverge from actual performance, distinctions among primary causes are sometimes lost. For example, accurate models may produce apparently poor forecasts when presumed field management strategies and facility outlays are not actually implemented as a result of external factors. When model forecasts duplicate actual performance, this can also be misinterpreted as model validation. In fact, the duplication could simply reflect compensating errors among the internal and external factors. The point here is that accurate forecasts do not mean accurate models. (The hypothetical corollary also appears noteworthy: poor forecasts do not necessarily equate to poor reservoir models.)

The nature of the oil industry limits the predictability of external factors, such as exact field operating practices. At best, multiple forecasts need to be developed for a range of external factors, as the ensemble sketch below illustrates.
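As a concrete illustration of the empirical category discussed under Methods and Limits, the following minimal Python sketch evaluates an Arps-type decline curve. All inputs (initial rate, decline constant, b exponent) are hypothetical round numbers, not values from the paper, and the forecast is only as trustworthy as the assumption that past production practices continue.

```python
from math import exp

def arps_rate(t, q_i, D_i, b):
    """Production rate at time t under Arps decline (t and D_i in consistent units)."""
    if b == 0:                       # exponential limit of the hyperbolic form
        return q_i * exp(-D_i * t)
    return q_i / (1.0 + b * D_i * t) ** (1.0 / b)

# Hypothetical inputs: 10,000 STB/D initial rate, 25%/yr initial decline, b = 0.5.
for year in (0, 1, 5, 10):
    print(f"year {year:2d}: {arps_rate(year, 10_000.0, 0.25, 0.5):8.1f} STB/D")
```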
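Because external factors resist deterministic prediction, one simple way to honor the "multiple forecasts for a range of external factors" point is a small ensemble: sample the uncertain factors and report percentiles of the outcome. The sketch below is an assumption-laden illustration, not the stochastic methodology of Haldorsen and Damsleth cited later; the uptime and decline-rate ranges are invented.

```python
import random
from math import exp

def cumulative(q_i, D, uptime, years=10):
    """10-yr cumulative for exponential decline (STB/D rate, 1/yr decline), scaled by uptime."""
    # integral of q_i * exp(-D t) over [0, years], converted from rate-years to STB
    return uptime * 365.0 * q_i * (1.0 - exp(-D * years)) / D

random.seed(0)
outcomes = sorted(
    cumulative(10_000.0,
               random.uniform(0.20, 0.35),    # internal factor: effective decline range
               random.uniform(0.85, 0.98))    # external factor: facility uptime range
    for _ in range(1000)
)
p10, p50, p90 = (outcomes[int(f * len(outcomes))] for f in (0.10, 0.50, 0.90))
print(f"P10 {p10:.3e}  P50 {p50:.3e}  P90 {p90:.3e} STB")
```

The point of the exercise is the spread between P10 and P90, i.e., the band of uncertainty every forecast carries, rather than any single number.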
Of the four uncertainty causes in Fig. 1, data quality and mathematical solutions are becoming less pronounced, while reservoir characterization and scale-up present the primary obstacles to improving performance forecasts. The lack of determinism in both external and internal factors suggests only the obvious: all reservoir performance forecasts carry a band of uncertainty. Ballin et al. attempted to quantify this uncertainty for a special class of problems. Haldorsen and Damsleth described a general methodology for producing stochastic forecasts.

Discretization

Both geostatistical and finite-difference models discretize reservoirs. The two models, however, have dissimilar discretization scales (geostatistical models use inches to several feet; finite-difference models use hundreds of feet). Current and projected hardware and software limitations suggest that the discretization gap between finite-difference and geostatistical models will not disappear for giant fields. Consider the multibillion-barrel ATL/INB field in West Africa. A finite-difference model using 1-ft³ cells would require about 0.5 trillion cells. The corresponding figure for the Safaniya field in the Middle East is about 7 trillion cells. Cells of 1 in.³ would imply models with roughly 800 trillion cells for the ATL/INB field (the first sketch following the Homogenization discussion checks this arithmetic). These numbers imply our indefinite need for a scale-up process and hence the resulting uncertainties.

Homogenization

An obvious outcome of the scale-up process is homogenization. Porosity/permeability transforms, often used to describe permeability fields, also contribute to homogenized property assignments in simulation models. Fig. 2 gives the porosity/permeability core data for the ATL/INB field. This field exhibits a complex lithology of predominantly silica sands intermixed with dolomite. The use of a single-variable transform, represented by the solid line, filters out the observed variability in the core data. An alternative approach that would reduce the homogenization effect is the use of "cloud transforms" developed by Kasischke and Williams. Cloud transforms produce property representations in models that can mimic distributions observed in real data (e.g., cores or logs); the second sketch below outlines the idea. Fig. 3 shows a sample distribution generated by a cloud transform for the Elk Hills 26R reservoir.
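The cell counts quoted under Discretization follow from simple volume ratios, as this back-of-envelope check shows. The ATL/INB bulk volume used here is a hypothetical round number chosen only to reproduce the stated order of magnitude; the paper's figures come from the actual field geometries.

```python
# 1 ft^3 contains 12^3 = 1728 in^3, so shrinking cells from 1 ft^3 to
# 1 in^3 multiplies the cell count by 1728.
CELL_IN3_AS_FT3 = 1.0 / 12**3

def cell_count(bulk_volume_ft3, cell_volume_ft3):
    return bulk_volume_ft3 / cell_volume_ft3

atl_inb_ft3 = 0.5e12   # assumed gross rock volume (~0.5 trillion ft^3)
print(f"ATL/INB, 1-ft^3 cells: {cell_count(atl_inb_ft3, 1.0):.1e}")          # ~5e11
print(f"ATL/INB, 1-in^3 cells: {cell_count(atl_inb_ft3, CELL_IN3_AS_FT3):.1e}")  # ~8.6e14
```

The 1728-fold jump from about 0.5 trillion to roughly 800 trillion cells is why the scale-up step, and its attendant uncertainty, will not disappear for giant fields.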
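The cloud-transform idea can be sketched as sampling permeability from the conditional scatter of core data at a given porosity, rather than reading it off a single curve. The binning scheme and synthetic "core data" below are illustrative assumptions only; this is not the Kasischke and Williams implementation.

```python
import random

def build_cloud(core_pairs, n_bins=10):
    """Group (porosity, permeability) core pairs into porosity bins."""
    lo = min(p for p, _ in core_pairs)
    hi = max(p for p, _ in core_pairs)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for phi, k in core_pairs:
        idx = min(int((phi - lo) / width), n_bins - 1)
        bins.setdefault(idx, []).append(k)
    return lo, width, n_bins, bins

def sample_perm(cloud, phi):
    """Draw a permeability from the core scatter in phi's porosity bin."""
    lo, width, n_bins, bins = cloud
    idx = min(max(int((phi - lo) / width), 0), n_bins - 1)
    return random.choice(bins[idx])

# Synthetic "core data": a log-linear trend plus scatter, mimicking Fig. 2's cloud.
random.seed(1)
core = [(phi, 10 ** (12 * phi - 1 + random.gauss(0, 0.4)))
        for phi in (random.uniform(0.05, 0.30) for _ in range(500))]
cloud = build_cloud(core)
print([round(sample_perm(cloud, 0.20), 1) for _ in range(5)])  # varied k at the same phi
```

Repeated draws at the same porosity return different permeabilities, so the model's property field retains the variability that a single-variable transform would filter out.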