Abstract

A pruned VGG19 model subjected to Axial Coronal Sagittal (ACS) convolutions and a custom VGG16 model are benchmarked to predict 3D fabric descriptors from a set of 2D images. The data used for training and testing are extracted from a set of 600 3D biphase microstructures created numerically. Fabric descriptors calculated from the 3D microstructures constitute the ground truth, while the input data are obtained by slicing the 3D microstructures in each direction of space at regular intervals. The computational cost of training the ACS-VGG19 model increases linearly with p (the number of images extracted in each direction of space), and increasing p does not improve the performance of the model, or does so only marginally. The best-performing ACS-VGG19 model yields a mean absolute percentage error (MAPE) of 2 to 5% for the means of aggregate size, aspect ratios and solidity, but cannot be used to estimate orientations. The custom VGG16 model yields a MAPE of 2% or less for the means of aggregate size, distance to nearest neighbor, aspect ratios and solidity. The MAPE is less than 3% for the mean roundness, and in the range of 5-7% for the aggregate volume fraction and the mean diagonal components of the orientation matrix. Increasing p improves the performance of the custom VGG16 model, but becomes cost-ineffective beyond 3 images per direction. For both models, the aggregate volume fraction is predicted with less accuracy than higher-order descriptors, which is attributed to the bias of the loss function towards highly correlated descriptors. Both models perform better at predicting means than standard deviations, which are noisy quantities. The custom VGG16 model outperforms the pruned ACS-VGG19 model, likely because it contains 3 times (p = 1) to 28 times (p = 10) fewer parameters, allowing better and faster convergence with less data. The custom VGG16 model predicts the second and third invariants of the orientation matrix with a MAPE of 2.8% and 8.9%, respectively, which suggests that the model can predict orientation descriptors regardless of the orientation of the input images.
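Three quantitative elements of the abstract lend themselves to a short illustration: the slicing protocol used to build the 2D input set (p regularly spaced images per direction of space), the MAPE metric used to report accuracy, and the rotation-invariant quantities of the orientation matrix. The sketch below is illustrative only; the array representation, the interval convention, and all function names are assumptions and do not come from the paper.

```python
import numpy as np

def extract_slices(volume, p):
    """Illustrative: take p regularly spaced 2D slices along each of the
    three axes of a 3D voxel volume (3*p images in total). The exact
    spacing convention used in the paper is not specified here."""
    slices = []
    for axis in range(3):
        n = volume.shape[axis]
        idx = np.linspace(0, n - 1, p + 2, dtype=int)[1:-1]  # interior indices only
        slices.extend(np.take(volume, i, axis=axis) for i in idx)
    return slices

def mape(y_true, y_pred):
    """Mean absolute percentage error, the accuracy metric quoted in the abstract."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

def orientation_invariants(a):
    """Standard second and third invariants of a 3x3 orientation (fabric) matrix:
    I2 = 0.5*(tr(a)^2 - tr(a @ a)) and I3 = det(a). Both are rotation-invariant,
    which is why predicting them tests whether the orientation estimates depend
    on how the input images are cut."""
    tr = np.trace(a)
    i2 = 0.5 * (tr ** 2 - np.trace(a @ a))
    return float(i2), float(np.linalg.det(a))
```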
