Abstract

A procedure combining experiments and deep learning is demonstrated to acquire pore-scale images of oil- and water-wet surfaces over a large field of view in microfluidic devices and to classify wettability based upon these pore-scale images. Deep learning supplants the manual, time-consuming, error-prone investigation and categorization of such images. Image datasets were obtained by visualizing the distribution of immiscible phases (n-decane and water) within in-house fabricated micromodels containing sandstone-type and carbonate-type pore structures. The reference dataset consists of 6400 color images binned into four classes for sandstone (water- or oil-wet surfaces) and carbonate (water- or oil-wet surfaces) pore-network patterns, with 1600 images per class. During 10 sequential training and testing runs of the deep-learning algorithm, 3000, 100, and 100 images were randomly assigned for each rock pattern as the training, validation, and test sets, respectively. We trained and optimized both a Fully Connected Neural Network (FCN) and a Convolutional Neural Network (ConvNet) using the image data; as expected, the ConvNet, implemented with 5 and 8 layers, performs better. The FCN shows an average test set accuracy for binary surface wettability classification of 87.4% for the sandstone-type and 98.7% for the carbonate-type pore networks. The distinctive heterogeneity of the carbonate rock type and the corresponding phase saturation profile result in the better prediction accuracy. The best ConvNet models show an average test set accuracy for binary surface wettability classification of 99.4 ± 0.1% for both sandstone-type and carbonate-type pore networks. Heterogeneous pore sizes and an abundance of small pores amplify the effects of wetting and aid identification. Overall, the test set accuracy for the simultaneous classification of all four classes, spanning both sandstone (water- or oil-wet) and carbonate (water- or oil-wet) rock patterns, is 98.5% with an 8-layer ConvNet. Performance of the deep-learning model is further interpreted using saliency maps that indicate the degree to which each pixel in the image affects the classification score. Pixels at and adjacent to interfaces are most important to classification.
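
The abstract refers to small ConvNet classifiers for four wettability/rock-pattern classes and to gradient-based saliency maps. The sketch below is a minimal illustration of such a pipeline in TensorFlow/Keras; the layer counts, filter sizes, image resolution, and optimizer are assumptions for illustration only, not the 5- and 8-layer architectures or training settings reported in the paper.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_convnet(input_shape=(128, 128, 3), n_classes=4):
        # Small ConvNet sketch; layer count and filter sizes are illustrative,
        # not the architectures described in the abstract.
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def saliency_map(model, image, class_index):
        # Vanilla gradient saliency: magnitude of d(class score)/d(pixel),
        # reduced over color channels to one value per pixel.
        x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)
            score = model(x, training=False)[0, class_index]
        grads = tape.gradient(score, x)[0]
        return tf.reduce_max(tf.abs(grads), axis=-1).numpy()

In such a setup the four output classes would correspond to sandstone water-wet, sandstone oil-wet, carbonate water-wet, and carbonate oil-wet images, with the 3000/100/100 training/validation/test images per rock pattern drawn randomly before each of the 10 runs.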
