Abstract
Estimating acoustic impedance from seismic data is a crucial step in reservoir characterization. While data-driven impedance inversion based on deep learning has shown promising results, it relies heavily on extensive well logs for labeling, which is often impractical in many exploration scenarios. Recently, the zero-shot and few-shot learning performance of pretrained foundation models such as the Generative Pre-trained Transformer (GPT) and the Masked Autoencoder (MAE) has highlighted that knowledge learned from vast amounts of unlabeled data can be transferred to downstream tasks with minimal labeled data. However, applying Transformer-based representation learning models to 3D seismic data inversion poses three challenges: (1) computational and memory constraints due to the high-dimensional nature of the data; (2) difficulty in extracting fine-grained image features with Transformers, which hampers high-frequency impedance inversion; and (3) the fixed input size of Transformers, which leads to inversion artifacts. In this work, we introduce the Seismic Masked Autoencoder (SeisMAE), a Transformer-based representation model tailored for the inversion of 3D seismic data. It incorporates three key components: (1) aggregated dimensionality-reduction encoding to handle redundancy in seismic data, significantly improving computational efficiency; (2) multi-scale self-attention feature fusion to enhance the model's capacity for low-level feature representation; and (3) a stitching decoding strategy to eliminate inversion stitching artifacts. Experimental validations highlight the efficacy of our approach. On the synthetic SEAM I dataset, we demonstrate the effectiveness of each component and SeisMAE's superior performance. On the real-world Netherlands F3 dataset, SeisMAE delivers reliable inversion outcomes with only four labeled examples. We compared SeisMAE against various inversion techniques, including a 1D Convolutional Neural Network (1D-CNN), UNet-based and HRNet-based models, and TransInver, over which SeisMAE exhibited significant advantages.
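The masked-autoencoder pretraining idea that the abstract builds on can be illustrated with a toy sketch: a large fraction of input patches is hidden, and a model is trained to reconstruct the hidden patches from the visible ones, so that representations can be learned without well-log labels. Everything below (the 75% mask ratio, patch size, and the trivial mean "prediction") is an illustrative assumption, not a detail of SeisMAE itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(section, patch=8):
    """Split a 2D seismic section (H, W) into non-overlapping flat patches."""
    h, w = section.shape
    ph, pw = h // patch, w // patch
    return (section[:ph * patch, :pw * patch]
            .reshape(ph, patch, pw, patch)
            .transpose(0, 2, 1, 3)
            .reshape(ph * pw, patch * patch))

def random_mask(n_patches, ratio=0.75):
    """Randomly choose which patch indices are hidden from the encoder."""
    n_masked = int(n_patches * ratio)
    perm = rng.permutation(n_patches)
    return perm[:n_masked], perm[n_masked:]  # (masked, visible)

# Stand-in for a seismic amplitude section; a real pipeline would load traces.
section = rng.standard_normal((64, 64))
patches = patchify(section)
masked_idx, visible_idx = random_mask(len(patches))

# During MAE pretraining only the reconstruction of the *masked* patches is
# scored. Here a trivial "model" predicts the mean of the visible patches,
# just to show where the reconstruction loss is computed.
prediction = np.full_like(patches[masked_idx], patches[visible_idx].mean())
loss = float(np.mean((prediction - patches[masked_idx]) ** 2))
print(loss)
```

In an actual MAE, the visible patches are fed through a Transformer encoder and a lightweight decoder reconstructs the masked ones; the sketch above only demonstrates the patching, masking, and masked-only loss that make the scheme label-free.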