Abstract

Time-lapse, or 4D, seismic seeks to image changes in the reservoir and/or the overburden caused by production. Confidence in interpreting seismic changes depends on good seismic repeatability in areas unaffected by production, driving ever-greater control over seismic acquisition parameters. We show that two repeatability metrics, normalized root-mean-square (RMS) error and predictability, have complementary sensitivities to causes of data mismatch and are therefore useful both for monitoring our ability to repeat seismic acquisition and for guiding the processing of the data volumes. We review the properties of the two metrics and illustrate their behaviour and interpretation using numerical and real data examples.

Introduction

4D data are increasingly used to study production-induced changes in the seismic response of a reservoir as part of a reservoir management program. However, residual differences in the repeated time-lapse data that do not represent changes in the subsurface geology reduce the effectiveness of the method. These differences depend on many factors, such as signature variation, acquisition geometry, and recording-fidelity differences between the two surveys. Such factors may be regarded as contributing to the time-lapse noise, and any effort to improve the time-lapse signal-to-noise ratio must address the quantifiable repeatability of the seismic survey. Although there are counter-examples [6], minimizing the acquisition footprint and repeating the geometry, so as to equalize residual footprints in both surveys, are considered important. Optimisation of repeatability has been a key objective in the development of point-receiver acquisition systems [9], with shot-by-shot signature estimation and active streamer positioning used to monitor and control as many acquisition parameters as possible.
In this paper, which further develops previous analysis [10], we examine the use of two repeatability metrics in assessing the similarity of pairs of datasets, ranging from synthetic data to repeatability field trials and, finally, to a 4D case study.

Repeatability metrics

One commonly used metric to quantify the likeness of two traces a_t and b_t within a given time window t1-t2 is the normalized RMS difference: the RMS of the difference divided by the average RMS of the inputs, expressed as a percentage:

    NRMS = 200 × RMS(a_t − b_t) / ( RMS(a_t) + RMS(b_t) ),

where

    RMS(x_t) = √( Σ_{t=t1}^{t2} x_t² / N )

and N is the number of samples in the interval t1-t2. The values of NRMS are not intuitive and are not limited to the range 0 to 100%. For example, if both traces contain uncorrelated random noise, the NRMS value is 141% (i.e., √2 × 100%). If the traces anti-correlate (i.e., are 180° out of phase), or if one trace contains only zeros, the NRMS error is 200%, the theoretical maximum. If one trace is half the amplitude of the other, the NRMS error is 66.7%.

Predictability is another measure of repeatability and is equivalent to the coherence of White, who used it to quantify the spectral match between synthetic seismograms and seismic traces as the proportion of power on the seismic trace that can be predicted by linearly filtering the synthetic trace. Here it is defined in terms of correlations: the summed squared cross-correlation within a time window divided by the summed product of the autocorrelations, expressed as a percentage:

    PRED = 100 × Σ_τ φ_ab(τ)² / Σ_τ ( φ_aa(τ) φ_bb(τ) ),

where φ_ab denotes the cross-correlation between traces a_t and b_t computed within the time window t1-t2, and φ_aa and φ_bb the corresponding autocorrelations. Expressed in this way, predictability values lie in the range 0-100%.
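As a concrete sketch of how the two metrics can be computed, the following NumPy implementation follows the definitions above; the function names and interface are illustrative and not from the paper:

```python
import numpy as np

def nrms(a, b):
    """Normalized RMS difference of two traces, as a percentage:
    200 * RMS(a - b) / (RMS(a) + RMS(b))."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    return 200.0 * rms(a - b) / (rms(a) + rms(b))

def predictability(a, b):
    """Predictability, as a percentage: the summed squared
    cross-correlation divided by the summed product of the
    autocorrelations, over all lags of the window."""
    # 'full' mode returns correlations at lags -(N-1) .. (N-1)
    xcorr = np.correlate(a, b, mode="full")
    acorr_a = np.correlate(a, a, mode="full")
    acorr_b = np.correlate(b, b, mode="full")
    return 100.0 * np.sum(xcorr ** 2) / np.sum(acorr_a * acorr_b)
```

Applied to a trace and a half-amplitude copy of itself, `nrms` returns 66.7% while `predictability` returns 100%, illustrating the complementary sensitivities noted above: NRMS responds to amplitude differences, whereas predictability is insensitive to a constant scaling.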
