Abstract

Deep learning has shown great potential in geosciences, such as seismic data processing and interpretation, improving decision-making and reducing analysis time. However, deep learning faces two main challenges. First, many deep learning models rely on labeled data, which can be time-consuming to obtain. Second, the predictions from these models often lack interpretability, making it difficult to use them for high-value decisions. To address these limitations, we propose a novel workflow that eliminates the need for labeled data and enables interpretation of the results, highlighting key geological features. The proposed workflow trains a Vision Transformer (ViT) whose six attention heads produce attention maps that focus on diverse, relevant regions by assigning them higher attention values. We first train the ViT using a modified DINO (Distillation with No Labels) method adapted to the seismic domain and monitor for overfitting. Then, to evaluate the focus of each attention head, we fit a Gradient Boosting Model that predicts the assigned attention from nine seismic attributes. Finally, the method samples the seismic attributes in stationary regions of the attention maps and calculates SHAP values to determine which attributes most influence the attention prediction. Each attention head can concentrate on distinct geological features of the input seismic image, as indicated by the different relationships between SHAP values and seismic attributes. Additionally, each attention head can detect the same geologically significant pattern regardless of its location in the image, based on the attributes it relies on. The proposed workflow thus makes the model's attention interpretable, guided by expert knowledge expressed through seismic attributes.
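
To make the attribution step concrete, the sketch below illustrates one way the regression-and-explanation stage could look: attention values from a single head are regressed on seismic attributes with a Gradient Boosting Model, and SHAP values from a tree explainer rank each attribute's impact. This is a minimal, self-contained sketch under stated assumptions, not the authors' implementation; the attribute names, array shapes, and synthetic data are illustrative placeholders.

```python
# Illustrative sketch (not the paper's code): regress one attention head's
# values on seismic attributes, then rank attribute impact with SHAP.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
import shap

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the nine seismic attributes, sampled at
# stationary regions of one attention map (n_samples x 9 matrix).
attribute_names = [
    "amplitude", "envelope", "instantaneous_phase", "instantaneous_frequency",
    "coherence", "dip", "curvature", "rms_amplitude", "relative_impedance",
]
X = rng.normal(size=(2000, len(attribute_names)))

# Stand-in target: the attention one ViT head assigned to the same
# locations (synthesized here; in the workflow it comes from the DINO-trained ViT).
y = 0.6 * X[:, 4] - 0.3 * X[:, 5] + 0.1 * rng.normal(size=2000)

# Fit the gradient-boosting model mapping attributes -> attention.
gbm = GradientBoostingRegressor(n_estimators=300, max_depth=3)
gbm.fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; the mean
# absolute SHAP value per attribute ranks its impact on the head's focus.
explainer = shap.TreeExplainer(gbm)
shap_values = explainer.shap_values(X)
impact = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(attribute_names, impact), key=lambda t: -t[1]):
    print(f"{name:>25s}: {score:.3f}")
```

Ranking attributes by mean absolute SHAP value, as in the last loop, is one reasonable way to read off which geological characteristics a given head responds to; repeating the fit per head would surface the head-to-head differences the abstract describes.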
