Abstract

Modern exploration for oil is heavily dependent on seismic data, which is used to detect and estimate properties of subsurface reservoirs. Seismic data is processed by applying a sequence of algorithms to filter, estimate velocity fields, beamform, and invert for reservoir properties. There are multiple algorithms to choose from at each processing step—ranging from simple and inexpensive to complex and costly. The Bayesian Cramér–Rao bound, combined with a statistical model of the signal and noise in seismic data, can be used to predict the information content of a specific data set at each stage in a proposed processing flow. This allows the exploration manager to identify the least-cost processing flow to achieve a certain information goal and to identify when the goal is unachievable at any cost. It can also guide research efforts to those processing steps where the greatest amount of information is being lost.
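To make the idea of predicting information content concrete, the following is a minimal sketch, not taken from the paper, of how a Bayesian Cramér–Rao bound could be evaluated for a linear-Gaussian toy model. The forward matrix, prior covariance, and per-stage noise levels are all illustrative assumptions standing in for the statistical signal-and-noise model of a real processing flow.

```python
import numpy as np

def bcrb(H, R, P):
    """Bayesian CRB = inverse of the Bayesian information matrix
    J_B = H^T R^{-1} H + P^{-1} (data term plus prior term),
    for the linear-Gaussian model y = H @ theta + n."""
    J_data = H.T @ np.linalg.inv(R) @ H
    J_prior = np.linalg.inv(P)
    return np.linalg.inv(J_data + J_prior)

rng = np.random.default_rng(0)
n_params, n_obs = 3, 20
H = rng.standard_normal((n_obs, n_params))  # assumed forward model
P = np.eye(n_params)                        # prior covariance on reservoir parameters

# Hypothetical processing stages, each leaving a different residual noise level.
for label, sigma in [("raw data", 2.0), ("after filtering", 1.0), ("after imaging", 0.5)]:
    R = sigma**2 * np.eye(n_obs)            # noise covariance at this stage
    bound = bcrb(H, R, P)
    # The trace of the BCRB lower-bounds the total mean-squared estimation error;
    # a smaller trace means more information about the parameters survives this stage.
    print(f"{label:16s}  min total MSE >= {np.trace(bound):.3f}")
```

In this toy setting, comparing the bound across stages mimics the paper's use: if the bound after a cheap processing step already meets the accuracy target, the costlier steps can be skipped, and if the bound with the best available model still exceeds the target, the goal is unachievable at any cost.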
