Machine Learning-Based Resilience Modeling and Assessment of High Consequence Systems Under Uncertainty

Abstract This study proposes a theoretical model and assessment method for the resilience of High Consequence Systems (HCS), addressing the risk assessment and decision-making needs of critical system engineering activities. By analyzing resilience theories from different domains and considering the characteristics of risk decision-making for HCS, a comprehensive theoretical resilience model for HCS is developed. The model considers the operational capability under a normal environment (consisting of reliability and maintainability) and the safety capability under an abnormal environment (consisting of resistance and emergency response ability). A case study is conducted on a spent fuel transportation packaging system: the sealing performance after sealing ring aging is treated as the reliability of the system and calculated using reliability methods, while the impact resistance is treated as the system's resistance, with the impact safety of the packaging system assessed using finite element analysis and surrogate modeling; the surrogate model is fitted to the deformation outputs of the finite element simulations. Maintainability and emergency response ability are also essential elements of the resilience model for HCS facing exceptional events. The resilience variation of the spent fuel transportation packaging system is computed under uncertainty in the yield stress of the buffer material, and the resilience of the packaging system is evaluated for different buffer thicknesses. The system's resilience decreases with higher uncertainty in the yield stress of the buffer material and increases with thicker buffer material. Improving the emergency rescue capability also improves the system's resilience.
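
As a rough illustration of how such a resilience index can be propagated through uncertainty in the buffer material's yield stress, the sketch below combines reliability, maintainability, resistance, and emergency response into a single score; the surrogate, aggregation weights, and distribution parameters are illustrative assumptions, not the paper's model.

```python
# Minimal sketch (not the paper's actual model): propagating uncertainty in the
# buffer-material yield stress through a hypothetical resilience aggregation.
# The surrogate below is a stand-in for the finite-element deformation model,
# and the aggregation weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def resistance_surrogate(yield_stress, buffer_thickness):
    """Stand-in surrogate: maps yield stress [MPa] and buffer thickness [mm]
    to a probability that impact deformation stays within the sealing limit."""
    margin = 0.002 * yield_stress + 0.01 * buffer_thickness - 0.5
    return 1.0 / (1.0 + np.exp(-10.0 * margin))  # logistic stand-in

def resilience(reliability, maintainability, resistance, emergency_response,
               w_normal=0.5, w_abnormal=0.5):
    """Illustrative aggregation: operational capability (reliability,
    maintainability) plus safety capability (resistance, emergency response)."""
    operational = reliability * maintainability
    safety = resistance * emergency_response
    return w_normal * operational + w_abnormal * safety

# Uncertain yield stress of the buffer material (assumed lognormal parameters).
yield_stress = rng.lognormal(mean=np.log(250.0), sigma=0.10, size=10_000)

samples = resilience(reliability=0.98, maintainability=0.95,
                     resistance=resistance_surrogate(yield_stress, buffer_thickness=40.0),
                     emergency_response=0.90)
print(f"mean resilience = {samples.mean():.3f}, std = {samples.std():.3f}")
```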

Approximate Integral Method for Nonlinear Reliability Analysis

In the realm of reliability analysis methods, the First-Order Reliability Method (FORM) exhibits excellent computational accuracy and efficiency in linear problems. However, it fails to deliver satisfactory performance in nonlinear ones. Therefore, this paper proposes an Approximate Integral Method (AIM) to calculate the failure probability of nonlinear problems. Firstly, based on the Most Probable Point (MPP) of failure and the reliability index β obtained from the FORM, the Limit State Function (LSF) is approximated by an Approximate Parabola (AP), which divides the hypersphere space into feasible and failure domains. Secondly, through the ratio of the approximate region occupied by the parabolic curve to the entire hypersphere region, the failure probability can be calculated by integration. To avoid the computational complexity in the parabolic approximate area due to high dimensionality, this paper employs a hyper-rectangle, constructed from chord lengths corresponding to different curvatures, as a substitute for the parabolic approximate area. Additionally, a function is utilized to adjust this substitution, ensuring accuracy in the calculation. Finally, the feasibility of this method is demonstrated on five numerical examples by comparison with results from Monte Carlo simulation (MCS) and the FORM.
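
The FORM quantities the method builds on (the MPP and the reliability index β) can be obtained with the standard HL-RF iteration; the sketch below illustrates that step on an assumed nonlinear limit state function and is not the AIM correction itself.

```python
# Minimal sketch of the FORM step described above (HL-RF iteration in standard
# normal space), not the paper's AIM correction. The limit state function g(u)
# below is an illustrative assumption.
import numpy as np
from scipy.stats import norm

def g(u):
    # Example nonlinear limit state in standard normal space (assumed).
    return u[0] ** 2 - 2.0 * u[1] + 4.0

def grad(f, u, h=1e-6):
    # Central-difference gradient.
    e = np.eye(len(u))
    return np.array([(f(u + h * e[i]) - f(u - h * e[i])) / (2 * h)
                     for i in range(len(u))])

def form_hlrf(g, ndim, tol=1e-8, max_iter=100):
    u = np.zeros(ndim)
    for _ in range(max_iter):
        gu, dgu = g(u), grad(g, u)
        u_new = (dgu @ u - gu) / (dgu @ dgu) * dgu   # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    beta = np.linalg.norm(u)        # reliability index
    return beta, u                  # u is the Most Probable Point (MPP)

beta, mpp = form_hlrf(g, ndim=2)
print(f"beta = {beta:.4f}, FORM Pf ≈ {norm.cdf(-beta):.3e}, MPP = {mpp}")
```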

Impact of Imperfect Kolsky Bar Experiments Across Different Scales Assessed Using Finite Elements

Abstract Typical Kolsky bars are 10–20 mm in diameter with lengths of each main bar being on the scale of meters. To push strain rates to 10⁴ s⁻¹ and above, smaller systems are needed. As the diameter and mass decrease, the precision of the alignment must increase to maintain the same relative tolerance, and the potential impacts of gravity and friction change. Finite element models are typically generated assuming a perfect experiment with exact alignment and no gravity. Additionally, these simulations tend to take advantage of the radial symmetry of an ideal experiment, which removes any potential for modeling nonsymmetric effects, but has the benefit of reducing computational load. In this work, we discuss results from these fast-running symmetry models to establish a baseline and demonstrate their first-order use case. We then take advantage of high-performance computing techniques to generate half-symmetry simulations using Abaqus® to model gravity and misalignment. The imperfection is initially modeled using a static general step followed by a dynamic explicit step to simulate the impact events. This multistep simulation structure can properly investigate the impact of these real-world, non-axisymmetric effects. These simulations explore the impacts of these experimental realities and are described in detail to allow other researchers to implement a similar finite element (FE) modeling structure to aid in experimentation and diagnostic efforts. It is shown that of the two sizes evaluated, the smaller 3.16-mm system is more sensitive than the larger 12.7-mm system to such imperfections.
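
For context, classic Kolsky-bar data reduction relates the measured reflected and transmitted strain pulses to specimen strain rate and stress; the sketch below uses illustrative pulse shapes and the 3.16-mm bar diameter as assumed inputs, not outputs of the finite element models discussed above.

```python
# Minimal sketch of classic Kolsky-bar (SHPB) data reduction from measured
# incident/reflected/transmitted strain pulses; bar properties, dimensions,
# and pulse shapes below are illustrative assumptions.
import numpy as np

E_bar, rho_bar = 200e9, 7850.0                    # bar modulus [Pa], density [kg/m^3]
c0 = np.sqrt(E_bar / rho_bar)                     # bar wave speed [m/s]
d_bar, d_spec, L_spec = 3.16e-3, 2.0e-3, 1.0e-3   # bar/specimen diameter, specimen length [m]
A_bar, A_spec = np.pi * d_bar**2 / 4, np.pi * d_spec**2 / 4

t = np.linspace(0.0, 40e-6, 400)                  # time [s]
eps_r = -2.0e-3 * np.sin(np.pi * t / t[-1])       # reflected pulse (illustrative)
eps_t = 1.5e-3 * np.sin(np.pi * t / t[-1])        # transmitted pulse (illustrative)

strain_rate = -2.0 * c0 * eps_r / L_spec          # specimen strain rate
strain = np.cumsum(strain_rate) * (t[1] - t[0])   # specimen strain (simple rectangle rule)
stress = E_bar * (A_bar / A_spec) * eps_t         # 1-wave specimen stress

print(f"peak strain rate ≈ {strain_rate.max():.1e} 1/s")
print(f"peak stress ≈ {stress.max() / 1e6:.0f} MPa")
```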

A Bayesian Multi-fidelity Neural Network to Predict Nonlinear Frequency Backbone Curves

Abstract The use of structural mechanics models during the design process often leads to the development of models of varying fidelity. Low-fidelity models are typically efficient to simulate but lack accuracy, while their high-fidelity counterparts are accurate but less efficient. This paper presents a multi-fidelity surrogate modeling approach that combines the accuracy of a high-fidelity finite element model with the efficiency of a low-fidelity model to train an even faster surrogate model that parameterizes the design space of interest. The objective of these models is to predict the nonlinear frequency backbone curves of the Tribomechadynamics Research Challenge benchmark structure, which exhibits simultaneous nonlinearities from frictional contact and geometric nonlinearity. The surrogate model consists of an ensemble of neural networks that learn the mapping between low- and high-fidelity data through nonlinear transformations. Bayesian neural networks are used to assess the surrogate model's uncertainty. Once trained, the multi-fidelity neural network is used to perform sensitivity analysis to assess the influence of the design parameters on the predicted backbone curves. Additionally, Bayesian calibration is performed to update the input parameter distributions to correlate the model parameters to the collection of experimentally measured backbone curves.
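
The core multi-fidelity idea, feeding a design point together with its low-fidelity prediction into a network trained on sparse high-fidelity data, with an ensemble standing in for the Bayesian uncertainty estimate, can be sketched as follows; the data-generating functions are illustrative, not the Tribomechadynamics Research Challenge benchmark.

```python
# Minimal sketch of a multi-fidelity surrogate: a network maps design
# parameters plus a low-fidelity prediction to the high-fidelity quantity,
# and an ensemble of networks stands in for the Bayesian uncertainty estimate.
# The low-/high-fidelity functions below are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def low_fidelity(x):   # cheap, biased model (assumed)
    return np.sin(8.0 * x) * x

def high_fidelity(x):  # expensive, accurate model (assumed)
    return (x - 0.1) * np.sin(8.0 * x) + 0.2 * x**2

# Few high-fidelity samples; each is paired with its low-fidelity prediction.
x_hf = rng.uniform(0.0, 1.0, size=(25, 1))
features = np.hstack([x_hf, low_fidelity(x_hf)])   # [design param, LF prediction]
targets = high_fidelity(x_hf).ravel()

ensemble = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=seed).fit(features, targets)
            for seed in range(5)]

x_test = np.linspace(0.0, 1.0, 7).reshape(-1, 1)
test_features = np.hstack([x_test, low_fidelity(x_test)])
preds = np.stack([m.predict(test_features) for m in ensemble])
for x, mu, sd in zip(x_test.ravel(), preds.mean(axis=0), preds.std(axis=0)):
    print(f"x = {x:.2f}: HF prediction = {mu:+.3f} ± {sd:.3f}")
```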

Uncertainty Quantification of a Machine Learning Model for Identification of Isolated Nonlinearities with Conformal Prediction

Abstract Structural nonlinearities are often spatially localized, such as joints and interfaces, localized damage, or isolated connections, in an otherwise linearly behaving system. Quinn and Brink [12] modeled this localized nonlinearity as a deviatoric force component. In other previous work [13], the authors proposed a physics-informed machine learning framework to determine the deviatoric force from measurements obtained only at the boundary of the nonlinear region, assuming a noise-free environment. However, in real experimental applications, the data are expected to contain noise from a variety of sources. In the present work, we explore the sensitivity of the trained network by comparing the network responses when trained on deterministic (“noise-free”) model data and model data with additive noise (“noisy”). As the neural network does not yield a closed-form transformation from the input distribution to the response distribution, we leverage conformal sets to build an illustration of sensitivity. Through the conformal set assumption of exchangeability, we can build a distribution-free prediction interval for the network responses of both the clean and noisy training sets. This work explores the application of conformal sets for uncertainty quantification of a deterministic structure-preserving neural network and its deployment in a structural health monitoring framework to detect deviations from a baseline state based on noisy measurements.
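
A minimal sketch of the split-conformal construction referenced above, using a placeholder regressor and synthetic data rather than the structure-preserving network: calibration residuals define a quantile that yields distribution-free prediction intervals under the exchangeability assumption.

```python
# Minimal sketch of split-conformal prediction intervals around a trained
# regressor; the model and data here are placeholders, not the paper's
# structure-preserving neural network.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(600, 1))
y = np.sinc(3 * x).ravel() + 0.05 * rng.normal(size=600)   # noisy observations

# Split into training and calibration sets (exchangeability assumption).
x_train, y_train = x[:400], y[:400]
x_cal, y_cal = x[400:], y[400:]

model = RandomForestRegressor(random_state=0).fit(x_train, y_train)

# Nonconformity scores on the calibration set.
scores = np.abs(y_cal - model.predict(x_cal))
alpha = 0.1                                    # target 90% coverage
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

x_new = np.array([[0.25]])
pred = model.predict(x_new)[0]
print(f"prediction = {pred:.3f}, "
      f"90% conformal interval = [{pred - q:.3f}, {pred + q:.3f}]")
```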

Automatic Ground-Truth Image Labeling for Deep Neural Network Training and Evaluation Using Industrial Robotics and Motion Capture

Abstract The United States Navy intends to increase the number of uncrewed aircraft in a carrier air wing. To support this increase, carrier-based uncrewed aircraft will be required to have some level of autonomy, as there will be situations where a human cannot be in/on the loop. However, there is no existing and approved method to certify autonomy within Naval Aviation. In support of generating certification evidence for autonomy, the United States Naval Academy has created a training and evaluation system to provide quantifiable metrics for feedback performance in autonomous systems. The preliminary use case for this work focuses on autonomous aerial refueling. Prior demonstrations of autonomous aerial refueling have leveraged a deep neural network (DNN) for processing visual feedback to approximate the relative position of an aerial refueling drogue. The training and evaluation system proposed in this work simulates the relative motion between the aerial refueling drogue and the feedback camera system using industrial robotics. Ground truth measurements of the pose between the camera and drogue are obtained using a commercial motion capture system. Preliminary results demonstrate calibration methods providing ground truth measurements with millimeter precision. Leveraging this calibration, the proposed system is capable of providing large-scale data sets for DNN training and evaluation against a precise ground truth.
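
The ground-truth labeling step reduces to expressing the drogue pose, reported by the motion capture system in the capture-volume frame, in the camera frame; the sketch below uses assumed poses purely to illustrate that transform.

```python
# Minimal sketch of the ground-truth labeling geometry: motion capture reports
# camera and drogue poses in the capture-volume (world) frame, and the label is
# the drogue pose expressed in the camera frame. Pose values are assumed.
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from rotation matrix R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Mocap measurements (world frame): camera and drogue poses (assumed values).
T_world_cam = pose(rot_z(np.deg2rad(5.0)), np.array([0.0, 0.0, 1.5]))
T_world_drogue = pose(rot_z(np.deg2rad(-2.0)), np.array([3.0, 0.2, 1.6]))

# Ground-truth label: drogue pose in the camera frame.
T_cam_drogue = np.linalg.inv(T_world_cam) @ T_world_drogue
relative_position = T_cam_drogue[:3, 3]
print(f"drogue position in camera frame [m]: {np.round(relative_position, 3)}")
```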

Discretization Error Estimation Using the Unsteady Error Transport Equations

Abstract Computational Fluid Dynamics (CFD) has gained significant utility in the analysis of diverse fluid flow scenarios, thanks to advances in computational power. Accurate error estimation techniques are crucial to ensure the reliability of CFD simulations, as errors can lead to misleading conclusions. This study focuses on the estimation of discretization errors in time-dependent simulations, building upon prior work addressing steady-state problems (Wang et al., 2020, “Error Transport Equation Implementation in the Sensei CFD Code,” AIAA Paper No. 2020-1047). In this research, we employ unsteady error transport equations (ETE) to generate localized discretization error estimates within the framework of the finite volume CFD code SENSEI. For steady-state problems, the ETE only need to be solved once after the solution has converged, whereas the unsteady ETE need to be co-advanced with the primal solve. To enhance efficiency, we adopt a one-sided temporal stencil and develop a modified iterative correction process tailored to the unsteady ETE. The time-marching schemes utilized encompass second-order accurate singly diagonally implicit Runge–Kutta (SDIRK) and second-order backward differentiation formula (BDF2), both being implicit. To rigorously assess the accuracy of our error estimates, all test cases feature known analytical solutions, facilitating order-of-accuracy evaluations. Two test cases are considered: the 2D convected vortex for inviscid flow and a cross-term sinusoidal (CTS) manufactured solution for viscous flow. Results indicate higher-order convergence rates for the 2D convected vortex test case even when iterative correction is not applied, with similar observations in the CTS case, albeit not at the finest grid levels. Although the current implementation of iterative correction exhibits lower stability compared to the primal solve, it generally enhances the discretization error estimate. Notably, after iterative correction, the discretization error estimate for the unsteady ETE achieves higher-order accuracy across all grid levels in the 2D CTS manufactured solution.
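
The order-of-accuracy evaluation enabled by exact solutions is straightforward to sketch: compute discretization-error norms on systematically refined grids or time steps and take the observed order between successive levels; the error values below are placeholders, not SENSEI results.

```python
# Minimal sketch of an observed order-of-accuracy study against an exact
# solution; the mesh sizes and error norms are illustrative placeholders.
import numpy as np

h = np.array([0.04, 0.02, 0.01, 0.005])           # grid/time-step sizes
err = np.array([3.2e-3, 8.3e-4, 2.1e-4, 5.3e-5])  # discretization-error norms (assumed)

# Observed order between successive refinement levels: p = log(e_c / e_f) / log(r)
r = h[:-1] / h[1:]
p_obs = np.log(err[:-1] / err[1:]) / np.log(r)
print("observed order of accuracy:", np.round(p_obs, 2))  # expect ~2 for BDF2/SDIRK2
```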
