Abstract

Reservoir characterization is often used to guide field development and therefore requires the ‘best’ seismic data possible. Furthermore, at the end of data processing, one should make sure that the propagating wavelet has constant properties: its amplitude, phase and bandwidth should remain as stable as possible across the area and the target depth interval. There is therefore a trade-off between achieving the ‘best’ wavelet characteristics and maintaining their stability. These conditions also apply to the incidence- and azimuthal-angle dimensions of the dataset for dedicated reservoir characterization workflows. To ensure that a processing sequence will lead to a dataset that meets the above-stated requirements, adequate quality control should be performed at numerous key processing stages. First, desired directions for improvement should be defined with the interpreter and translated into relevant seismic attributes. Attributes that can be mapped are favored, so that the lateral variations of the wavelet characteristics can be visualized, compared with other interpretative information and, ideally, their stationary behavior quantified. Then, milestones should be set at relevant steps of the processing sequence to quantify the selected attributes on intermediate migrated 3D seismic volumes. Finally, relative scores can be established to monitor the ongoing improvement in processing quality and, eventually, to compare it with a vintage dataset if one is available.
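As a rough illustration of the kind of mappable attributes and relative scores the abstract describes, the sketch below estimates per-trace RMS amplitude and mean instantaneous frequency (a bandwidth proxy) for a 3D volume, then summarizes their lateral stability with a coefficient-of-variation score. The function names, the choice of attributes, and the scoring formula are illustrative assumptions, not the authors' actual workflow; a minimal sketch using the standard Hilbert-transform approach.

```python
import numpy as np
from scipy.signal import hilbert

def wavelet_stability_attributes(volume, dt):
    """Compute mappable per-trace attributes for a 3D volume shaped
    (inline, crossline, time). Illustrative choice: RMS amplitude and
    mean instantaneous frequency from the analytic (Hilbert) trace."""
    analytic = hilbert(volume, axis=-1)          # analytic signal per trace
    rms_amp = np.sqrt(np.mean(volume ** 2, axis=-1))
    phase = np.unwrap(np.angle(analytic), axis=-1)
    inst_freq = np.diff(phase, axis=-1) / (2.0 * np.pi * dt)  # Hz
    mean_freq = np.mean(inst_freq, axis=-1)
    return rms_amp, mean_freq

def stability_score(attr_map):
    """Relative lateral-stability score (assumed form): coefficient of
    variation of an attribute map. Lower = more stationary wavelet."""
    return np.std(attr_map) / (np.abs(np.mean(attr_map)) + 1e-12)

if __name__ == "__main__":
    # Synthetic check: identical 30 Hz traces -> near-zero variability.
    dt, n = 0.004, 256
    t = np.arange(n) * dt
    trace = np.sin(2.0 * np.pi * 30.0 * t)
    vol = np.tile(trace, (4, 4, 1))
    amp_map, freq_map = wavelet_stability_attributes(vol, dt)
    print(stability_score(amp_map), freq_map.mean())
```

Such scores, computed on the same target interval for successive intermediate migrated volumes, give the relative quality-monitoring numbers the abstract mentions.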
