Abstract

Lithofacies identification is crucial in fields such as geology, geotechnical engineering, rock mechanics, and petroleum engineering, as it reveals the physical, chemical, and mineralogical characteristics of rocks that aid in identifying oil and gas reservoir types, volumes, and productivity. Traditional methods of identifying rock types, such as manual sample identification, micrograph identification, and experimental identification, are costly and time-consuming. Several studies have endeavored to enhance the degree of lithofacies identification automation using machine learning (ML) and deep learning (DL) techniques. However, their capacity to recognize intricate data remains restricted. To overcome this challenge, we introduce CoreViT, a modified architecture of the advanced vision transformer (ViT) that includes a parallel transformer encoder (PTE) for exchanging information among image patches and a class encoder (CE) for improved automatic lithofacies identification. Our study, focusing on core images of algal limestone, mudstone, and non-algal limestone from the Fengxi Well 1 in the Qaidam Basin, demonstrates that CoreViT achieves an average recognition accuracy of approximately 97.5% with an average loss of 0.071. Compared with classical convolutional neural network (CNN) models, CoreViT achieves relatively high accuracy and low loss. This study highlights the superiority of the ViT model over traditional deep convolutional neural networks (DCNNs) and suggests the great potential of applying the ViT model to lithofacies identification in core samples.
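As background for how a ViT-style model like CoreViT consumes core images, the first step of any vision transformer is to split the image into fixed-size, non-overlapping patches, each of which is then embedded and fed to the transformer encoder as a token. The sketch below illustrates only this generic patch-splitting step in plain Python (the function name `image_to_patches` and the toy 4x4 image are illustrative assumptions, not code from the paper; CoreViT's PTE and CE components are not shown):

```python
def image_to_patches(image, patch_size):
    """Split an H x W image (a list of rows of pixel values) into
    non-overlapping patch_size x patch_size patches, each flattened
    row-major — the standard tokenization step of a vision transformer."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h, patch_size):
        for c in range(0, w, patch_size):
            # Flatten the patch into a single token vector.
            patch = [image[r + i][c + j]
                     for i in range(patch_size)
                     for j in range(patch_size)]
            patches.append(patch)
    return patches


# Toy 4x4 "core image" with pixel values 0..15.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = image_to_patches(img, 2)
print(len(patches))   # → 4
print(patches[0])     # → [0, 1, 4, 5]
```

In a full ViT pipeline, each flattened patch would be linearly projected to an embedding, a learnable class token prepended, and positional encodings added before the encoder stack; CoreViT additionally routes patch information through its parallel transformer encoder and class encoder.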
