Abstract
Borehole image interpretation aims to evaluate the dip, azimuth and aperture of natural fractures and of mechanical features detected along the well, such as drilling-induced fractures and breakouts, and to classify them based on their characteristics. Traditionally, rule-based approaches are used for this task: they rely on manually engineered features as the image representation and on a set of rules to infer the structural parameters of fractures and breakouts from borehole images. However, such approaches can only handle simple cases; extracting geological knowledge from data to gain a more comprehensive understanding of structural features remains an unsolved problem. We introduce a dip picking approach based on deep neural networks. Compared with conventional data-driven approaches such as SVM or AdaBoost, deep models are better suited to complex borehole image interpretation for two reasons: (1) deep networks generalize well even when over-parameterized, un-regularized and fit to the training data with zero error, whereas traditional machine learning approaches suffer from severe overfitting; (2) deep networks compose multiple processing layers to learn representations of data at multiple levels of abstraction, and can therefore capture the high-level semantics that are key to borehole image interpretation. We convert single-step dip picking from borehole images into a two-stage approach consisting of scene parsing and interpretation. In the first stage, rather than directly applying existing scene-parsing models (e.g., U-Net or HR-Net), we design a novel multi-branch parsing model to better handle the data imbalance problem in borehole data.
Our core idea is to use a shared backbone network for common feature extraction and task-specific branches for individual classes such as stylolites, natural fractures, drilling-induced fractures and breakouts. In addition, a reverse attention module propagates information across branches. In this way, the network parameters in each branch can be trained with different amounts of data and, importantly, useful information from other branches can be exploited through reverse attention without extra manual labels. In the second stage, we use traditional curve-fitting techniques to achieve automatic dip picking. The model is trained iteratively using a binary mask as the ground truth for each image, with "0" denoting background pixels and "1" denoting pixels belonging to the class, and a loss function that penalizes the mismatch between the binary mask and the map produced by the CNN. After training, the CNN infers a probability for each pixel in the image; a class-specific threshold assigns pixels above it to the class and pixels below it to the background. We use the binary cross-entropy (BCE) loss to optimize the model so that it best distills the geological knowledge from the data. Our novelty lies in a multi-branch deep model with a reverse attention module: to our knowledge, this is the first work to mine geological knowledge with task-specific deep branches and to employ reverse attention for inter-branch communication. Once the backbone is trained, the parameters of the task-specific branches can be updated with limited data without suffering from the data imbalance issue. Such a network architecture has not been explored before in borehole image interpretation. Comprehensive experiments and visualizations confirm that our model achieves state-of-the-art performance.
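The reverse attention module described above can be illustrated with a minimal numpy sketch. This is an assumption about the mechanism based on how reverse attention is commonly formulated (re-weighting shared features by the complement of a sibling branch's prediction); the function and variable names are hypothetical and the abstract does not give the exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(shared_features, other_branch_logits):
    """Re-weight shared backbone features toward regions that a sibling
    branch did NOT detect (illustrative sketch, not the paper's exact module).

    shared_features: (C, H, W) feature map from the shared backbone.
    other_branch_logits: (H, W) raw logits from a sibling branch.
    """
    attention = 1.0 - sigmoid(other_branch_logits)  # high where the sibling is inactive
    return shared_features * attention[None, :, :]  # broadcast over channels

# Toy example: 4-channel features on an 8x16 patch
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 8, 16))
sibling_logits = rng.normal(size=(8, 16))
refined = reverse_attention(features, sibling_logits)
assert refined.shape == features.shape
```

Because the attention weights lie in (0, 1), the module only suppresses features; it never amplifies them, which is why no extra manual labels are needed to share information between branches.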
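The training and inference recipe above (a 0/1 ground-truth mask, a BCE loss penalizing the mismatch, and a class-specific probability threshold) can be sketched as follows. The helper names and the toy arrays are hypothetical; only the loss and thresholding logic follow the description in the text.

```python
import numpy as np

def bce_loss(prob, mask, eps=1e-7):
    """Binary cross-entropy between predicted probabilities and a 0/1 mask."""
    prob = np.clip(prob, eps, 1.0 - eps)
    return -np.mean(mask * np.log(prob) + (1.0 - mask) * np.log(1.0 - prob))

def assign_class(prob, threshold):
    """Pixels at or above the class-specific threshold belong to the class."""
    return (prob >= threshold).astype(np.uint8)

# Toy 2x2 example: "1" marks pixels belonging to the class
mask = np.array([[0, 1], [1, 0]], dtype=float)
good = np.array([[0.1, 0.9], [0.8, 0.2]])  # close to the mask
bad  = np.array([[0.9, 0.1], [0.2, 0.8]])  # inverted prediction
assert bce_loss(good, mask) < bce_loss(bad, mask)
assert np.array_equal(assign_class(good, 0.5), mask.astype(np.uint8))
```

Per-class thresholds matter here because the classes are imbalanced: a rare feature such as a breakout may need a lower cut-off than a common one such as bedding.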
Specifically, in terms of AUC, our model achieves an average of 81.7% on our in-house dataset (11 features and 6 wells), outperforming U-Net by 8.4% and HR-Net by 7.9%.
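The second-stage curve fitting can be made concrete with a standard geometric sketch: a planar fracture intersecting a cylindrical borehole traces a sinusoid on the unrolled image, whose amplitude encodes the dip and whose phase encodes the azimuth. The code below is an illustrative least-squares fit under that assumption (function names and the synthetic trace are hypothetical; the paper does not specify its exact fitting procedure).

```python
import numpy as np

def fit_dip(theta, depth, radius):
    """Fit z = z0 - A*cos(theta - azimuth) to a picked fracture trace on the
    unrolled borehole image, where A = radius * tan(dip).

    theta: tool-face azimuth angles (radians); depth: trace depth at each angle.
    Returns (dip, azimuth) in radians (apparent values, before any
    borehole-deviation correction).
    """
    # Linear least squares in the basis [1, cos(theta), sin(theta)]
    X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    z0, b, c = np.linalg.lstsq(X, depth, rcond=None)[0]
    amplitude = np.hypot(b, c)                 # A = radius * tan(dip)
    azimuth = np.arctan2(-c, -b) % (2 * np.pi)
    dip = np.arctan2(amplitude, radius)
    return dip, azimuth

# Synthetic trace: dip 30 deg, azimuth 120 deg, borehole radius 0.1 m
theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
true_dip, true_az, r = np.radians(30), np.radians(120), 0.1
depth = 100.0 - r * np.tan(true_dip) * np.cos(theta - true_az)
dip, az = fit_dip(theta, depth, r)
assert abs(dip - true_dip) < 1e-6 and abs(az - true_az) < 1e-6
```

In the full pipeline, the inputs to such a fit would be the pixels that the parsing stage assigned to a single fracture, rather than a noise-free synthetic trace.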