Permeability prediction of porous media from numerical approaches is an important supplement to experimental measurements, with the benefits of being more economical and efficient. However, the accuracy and reliability of traditional numerical approaches depend strongly on high-resolution images of porous media, which greatly limits their implementation in engineering applications. Herein, a semi-supervised machine learning approach is proposed to predict the permeability of porous media from low-resolution images. This approach consists of an autoencoder (AE) module, trained with unlabeled data, that assists the backbone convolutional neural network (CNN) in the prediction by providing a mapping from low-resolution porous media to high-resolution features. The low-resolution information from the CNN, trained with a small amount of labeled data, and the high-resolution information from the AE, trained with a larger amount of unlabeled data, are comprehensively considered in this approach. The prediction performance of the AE-CNN on low-resolution images is examined against the results of the traditional CNN approach and the lattice Boltzmann method (LBM) using the mean-squared error (MSE) and R-squared (R2) metrics. With 5-fold cross-validation, the average R2 on the test dataset is 0.896 for the AE-CNN, compared to 0.869 for the traditional CNN without the AE. In the best-performing fold, the MSEs for the AE-CNN are 0.022 and 0.064 on the training and test datasets respectively, while without the AE the MSEs for the CNN alone are 0.034 and 0.083, implying that the AE module can substantially improve prediction performance from low-resolution images of porous media.
As for the LBM simulations, their prediction reliability (average R2: 0.42; MSEs of 0.37 and 0.36 in the best-performing fold) is far lower than that of the CNN-based machine learning algorithms, owing to the large numerical errors at the blurred boundaries of low-resolution images.
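For reference, the two evaluation metrics quoted above can be computed as follows. This is a minimal sketch using NumPy with illustrative synthetic values, not the paper's permeability data:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean-squared error between reference and predicted permeabilities
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Illustrative values only (not from the paper)
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
print(mse(y_true, y_pred))  # 0.025
print(r2(y_true, y_pred))   # 0.98
```

In the paper these metrics are averaged over the five folds of cross-validation, with the best-performing fold also reported separately.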