Abstract
Tensor datatypes representing field variables such as stress, displacement, and velocity are increasingly common in data-driven modeling and analysis of simulations. Numerous methods, such as convolutional neural networks (CNNs), exist for meta-modeling of field data from simulations. As the complexity of the simulation increases, so does the cost of data acquisition, leading to limited-data scenarios. Modeling tensor datatypes under limited-data scenarios remains a hindrance for engineering applications. In this article, we introduce a direct image-to-image modeling framework of convolutional autoencoders enhanced by an information bottleneck loss function to handle tensor datatypes with limited data. The information bottleneck method penalizes nuisance information in the latent space while maximizing relevant information, making it robust in limited-data scenarios. The entire neural network framework is further combined with robust hyperparameter optimization. We perform numerical studies comparing the predictive performance of the proposed method with a dimensionality-reduction-based surrogate modeling framework on a representative linear elastic ellipsoidal void problem under uniaxial loading. The data structure focuses on the low-data regime (fewer than 100 data points) and includes the parameterized geometry of the ellipsoidal void as the input and the predicted stress field as the output. The results of the numerical studies show that the information bottleneck approach yields improved overall accuracy and more precise prediction of the extremes of the stress field. Additionally, an in-depth analysis is carried out to elucidate the information compression behavior of the proposed framework.
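The abstract does not specify the exact form of the information bottleneck loss used in the article; a minimal sketch, assuming a common variational parameterization of the latent space (a reconstruction term plus a KL penalty that discourages the latent code from carrying nuisance information, weighted by a hypothetical coefficient `beta`), might look like this:

```python
import numpy as np

def ib_loss(x, x_hat, mu, logvar, beta=1e-3):
    """Illustrative variational information-bottleneck-style loss.

    Combines the reconstruction error of the autoencoder with a KL
    penalty on the latent distribution N(mu, exp(logvar)) against a
    standard normal prior. The names and beta value are assumptions
    for illustration, not the article's actual implementation.
    """
    # Reconstruction term: mean squared error over the field
    recon = np.mean((x - x_hat) ** 2)
    # KL(N(mu, sigma^2) || N(0, 1)) summed over latent dimensions,
    # averaged over the batch; this term compresses the latent code
    kl = 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))
    return recon + beta * kl
```

With a perfect reconstruction and a latent posterior equal to the prior (`mu = 0`, `logvar = 0`), both terms vanish and the loss is zero; increasing `beta` trades reconstruction fidelity for stronger latent compression.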