Abstract

The assimilation of time-lapse (4D) seismic data is challenging with ensemble-based methods because of the massive number of data points, which leads to excessive computational time and memory usage during the model-updating step. We address this problem using a deep convolutional autoencoder to extract the relevant features of 4D images and generate a reduced representation of the data. The architecture of the autoencoder is based on the well-known VGG-19 network, which allows us to take advantage of transfer learning. Using a pre-trained model bypasses the need for large training datasets and avoids the high computational cost of training a deep network. To further improve the reconstruction of the seismic images, we fine-tune the weights of the latent convolutional layer. We propose a fully convolutional architecture, which allows distance-based localization to be applied during data assimilation with the Ensemble Smoother with Multiple Data Assimilation (ES-MDA). The performance of the proposed method is investigated in a synthetic benchmark problem with realistic settings. We evaluate the methodology with three variants of the autoencoder, each with a different level of data reduction. The experiments indicate that it is possible to use latent representations with major data reductions without impairing the quality of the data assimilation. Additionally, we compare CPU and GPU implementations of the ES-MDA update step and show in another synthetic problem that the reduction in the number of data points obtained with the deep autoencoder can substantially reduce the overall computational cost of the data assimilation for large reservoir models.
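
The following is a minimal sketch, assuming TensorFlow/Keras, of how an autoencoder whose encoder reuses pre-trained VGG-19 blocks (transfer learning) with a fine-tuned latent convolutional layer might be assembled. The input shape, the depth of the reused blocks, and the decoder layout are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch only: fully convolutional autoencoder with a pre-trained VGG-19 encoder.
# Assumptions: 128x128 single-channel seismic maps tiled to 3 channels to match
# the ImageNet input format; encoder truncated at "block3_pool".
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_autoencoder(input_shape=(128, 128, 1), trainable_latent=True):
    # Pre-trained VGG-19 convolutional blocks, frozen (transfer learning).
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                      input_shape=input_shape[:2] + (3,))
    vgg.trainable = False

    inp = layers.Input(shape=input_shape)
    x = layers.Concatenate()([inp, inp, inp])  # replicate the channel to fit VGG input

    # Encoder: only the first three convolutional blocks are reused here; the paper's
    # variants differ in how deep the encoder goes (i.e., the level of data reduction).
    encoder = Model(vgg.input, vgg.get_layer("block3_pool").output)
    z = encoder(x)

    # Latent convolutional layer whose weights are fine-tuned for the seismic data.
    z = layers.Conv2D(64, 3, padding="same", activation="relu",
                      trainable=trainable_latent, name="latent_conv")(z)

    # Decoder: small fully convolutional upsampling stack trained from scratch.
    d = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(z)
    d = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(d)
    d = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(d)
    out = layers.Conv2D(1, 3, padding="same", activation="linear")(d)

    return Model(inp, out, name="vgg19_autoencoder")

model = build_autoencoder()
model.compile(optimizer="adam", loss="mse")
```

Because the architecture is fully convolutional, the latent features retain a spatial layout, which is what makes distance-based localization applicable to the reduced data.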
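A single ES-MDA analysis step on the reduced (latent) data can likewise be sketched with NumPy, as below. Variable names, array shapes, and the diagonal measurement-error covariance are assumptions made for illustration; the same matrix operations can be offloaded to a GPU (for example with CuPy) to mirror the CPU/GPU comparison mentioned above.

```python
# Sketch only: one ES-MDA analysis step applied to the encoded (reduced) data.
# M (Nm x Ne): ensemble of model parameters; D (Nd x Ne): predicted latent data;
# d_obs (Nd,): encoded observations; C_d (Nd,): diagonal measurement-error covariance;
# alpha: ES-MDA inflation factor for the current assimilation step.
import numpy as np

def esmda_update(M, D, d_obs, C_d, alpha, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    Nm, Ne = M.shape
    Nd = d_obs.size

    # Perturb the observations with inflated measurement noise (standard ES-MDA).
    d_pert = d_obs[:, None] + np.sqrt(alpha * C_d)[:, None] * rng.standard_normal((Nd, Ne))

    # Ensemble anomalies (deviations from the ensemble mean).
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)

    # Cross- and auto-covariances estimated from the ensemble.
    C_md = dM @ dD.T / (Ne - 1)
    C_dd = dD @ dD.T / (Ne - 1)

    # Kalman-like gain; inverting an Nd x Nd matrix is where the data reduction pays off.
    K = C_md @ np.linalg.inv(C_dd + alpha * np.diag(C_d))

    return M + K @ (d_pert - D)
```

The cost of the update grows with the number of data points Nd, so replacing the full 4D seismic images with their latent representation directly shrinks the matrices handled in this step.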
