Abstract

The acquisition of rock property information is central to regional geological surveys and mineral exploration, but methods based on hand-crafted features depend heavily on human prior knowledge and transfer poorly. End-to-end deep learning techniques, exemplified by convolutional neural networks (CNNs), have achieved significant success in image classification. However, previous end-to-end CNN-based methods struggle to focus on the critical areas of an image and cannot fully exploit the global dependencies within rock images. In this paper, RockS2Net is proposed for rock image classification. The RockS2Net framework adopts a Siamese architecture with two parameter-sharing branches, enabling the efficient extraction of both global and local features: global features are extracted from the entire image, while local features are extracted from its critical areas. A spatial transformer network (STN) is introduced to map microscopic images of rock sections to their critical areas. By fusing the local and global features, rock properties can be predicted more accurately from microscopic images of rock sections. To evaluate the generalizability of the proposed method, the constructed CHN-Rock image dataset is used for experiments and evaluation. Experimental results show that the accuracy of the proposed RockS2Net on the CHN-Rock image dataset is 2–3% higher than that of other rock image classification networks.
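The two-branch idea can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions, not the authors' implementation: `affine_crop`, `shared_features`, and the fixed `theta` matrices are hypothetical stand-ins for the learned STN localization and the weight-shared CNN branches described in the abstract.

```python
import numpy as np

def affine_crop(image, theta, out_h, out_w):
    """Sample an out_h x out_w view of `image` through a 2x3 affine
    matrix `theta`, mimicking an STN grid sampler (nearest neighbour)."""
    H, W = image.shape
    ys = np.linspace(-1, 1, out_h)          # normalised target grid
    xs = np.linspace(-1, 1, out_w)
    gy, gx = np.meshgrid(ys, xs, indexing="ij")
    grid = np.stack([gx.ravel(), gy.ravel(), np.ones(out_h * out_w)])
    src = theta @ grid                      # 2 x N source coords in [-1, 1]
    sx = ((src[0] + 1) / 2 * (W - 1)).round().astype(int).clip(0, W - 1)
    sy = ((src[1] + 1) / 2 * (H - 1)).round().astype(int).clip(0, H - 1)
    return image[sy, sx].reshape(out_h, out_w)

def shared_features(patch):
    """Stand-in for the weight-shared CNN branch: the same fixed
    descriptor is applied to both views, like shared parameters."""
    return np.array([patch.mean(), patch.std(), patch.max()])

# Toy rock-section image with a bright "critical area" in one corner.
img = np.zeros((32, 32))
img[20:30, 20:30] = 1.0

# Identity theta = global view; scaled/shifted theta = local crop
# (in the paper the local theta would be predicted by the STN).
theta_global = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
theta_local  = np.array([[0.3, 0.0, 0.55], [0.0, 0.3, 0.55]])

g = shared_features(affine_crop(img, theta_global, 16, 16))  # global branch
l = shared_features(affine_crop(img, theta_local, 16, 16))   # local branch
fused = np.concatenate([g, l])   # fused feature fed to the classifier head
```

The local crop concentrates on the bright region, so its mean activation is much higher than the global view's, which is the intuition behind letting the STN steer the second branch toward critical areas before fusion.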
