Abstract

Data assimilation is an important discipline in geosciences that aims to combine the information contents from both prior geophysical models and observational data (observations) to obtain improved model estimates. Ensemble-based methods are among the state-of-the-art assimilation algorithms in the data assimilation community. When applying ensemble-based methods to assimilate big geophysical data, substantial computational resources are needed to compute and/or store certain quantities (e.g., the Kalman-gain-type matrix), given both big model and data sizes. In addition, uncertainty quantification of observational data, e.g., in terms of estimating the observation error covariance matrix, also becomes computationally challenging, if not infeasible. To tackle the aforementioned challenges in the presence of big data, in a previous study, the authors proposed a wavelet-based sparse representation procedure for 2D seismic data assimilation problems (also known as history matching problems in petroleum engineering). In the current study, we extend the sparse representation procedure to 3D problems, as this is an important step towards real field case studies. To demonstrate the efficiency of the extended sparse representation procedure, we apply an ensemble-based seismic history matching framework with the extended sparse representation procedure to a 3D benchmark case, the Brugge field. In this benchmark case study, the total number of seismic data points is in the order of millions. We show that the wavelet-based sparse representation procedure is highly efficient in reducing the size of seismic data while preserving its salient features. Moreover, even with a substantial data-size reduction through sparse representation, the ensemble-based seismic history matching framework can still achieve good estimation accuracy.

Highlights

  • Data assimilation is an important discipline in geosciences that aims to combine the information contents from both prior geophysical models and observational data to obtain improved model estimates [1]

  • To demonstrate the efficiency of the integrated workflow, we apply it to a 3D benchmark case, the Brugge field case

  • The seismic data used in this study are near- and far-offset amplitude versus angle (AVA) attributes, with more than 7 million data points

Introduction

Data assimilation is an important discipline in geosciences that aims to combine the information contents from both prior geophysical models and observational data (observations) to obtain improved model estimates [1]. The proposed framework consists of three key components (see Fig. 3): forward AVA simulation, sparse representation (in terms of leading wavelet coefficients) of both observed and simulated AVA data, and the history matching algorithm.
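The sparse-representation idea, i.e., transforming the data into a wavelet domain and keeping only the leading (largest-magnitude) coefficients, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the orthonormal Haar wavelet, the single decomposition level, the tiny synthetic cube, and the 10% keep fraction are all illustrative assumptions.

```python
import numpy as np

def haar_step(a, axis):
    # One level of the orthonormal Haar transform along one axis:
    # scaled pairwise averages (approximation) and differences (detail).
    a = np.swapaxes(a, 0, axis)
    avg = (a[0::2] + a[1::2]) / np.sqrt(2.0)
    dif = (a[0::2] - a[1::2]) / np.sqrt(2.0)
    return np.swapaxes(np.concatenate([avg, dif], axis=0), 0, axis)

def inv_haar_step(c, axis):
    # Exact inverse of haar_step along the same axis.
    c = np.swapaxes(c, 0, axis)
    n = c.shape[0] // 2
    avg, dif = c[:n], c[n:]
    out = np.empty_like(c)
    out[0::2] = (avg + dif) / np.sqrt(2.0)
    out[1::2] = (avg - dif) / np.sqrt(2.0)
    return np.swapaxes(out, 0, axis)

def haar3d(x):
    # One decomposition level along each of the three axes.
    for ax in range(3):
        x = haar_step(x, ax)
    return x

def inv_haar3d(c):
    for ax in reversed(range(3)):
        c = inv_haar_step(c, ax)
    return c

def sparsify(c, keep_frac):
    # Retain only the leading coefficients (largest magnitudes), zero the rest.
    k = max(1, int(keep_frac * c.size))
    thresh = np.partition(np.abs(c).ravel(), -k)[-k]
    return np.where(np.abs(c) >= thresh, c, 0.0)

rng = np.random.default_rng(0)
# Hypothetical stand-in for a 3D AVA attribute cube (real cubes are far larger);
# cumsum along one axis makes the field spatially correlated, as seismic data are.
cube = rng.standard_normal((8, 8, 8)).cumsum(axis=0)

coeffs = haar3d(cube)
sparse = sparsify(coeffs, keep_frac=0.1)   # keep ~10% of coefficients
recon = inv_haar3d(sparse)
rel_err = np.linalg.norm(recon - cube) / np.linalg.norm(cube)
```

Because the transform concentrates the energy of correlated data in a few coefficients, the retained 10% reconstructs the cube with a small relative error, which is the mechanism that lets the history matching framework work with a drastically reduced data vector.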
