Abstract
Aiming to develop new insights into quantitative methods for the validation of computational model predictions, this paper investigates four types of methods, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. We classify validation experiments into two categories: (1) fully characterized (all the model/experimental inputs are measured and reported as point values), and (2) partially characterized (some of the model/experimental inputs are not measured or are reported as intervals/distributions). Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Formulations and implementation details are outlined for both equality and interval hypotheses. It is shown that Bayesian interval hypothesis testing, the reliability-based method, and the area metric-based method can account for the existence of directional bias, where the mean predictions of a numerical model may be consistently below or above the corresponding experimental observations. It is also found that, under some specific conditions, the Bayes factor metric in Bayesian equality hypothesis testing and the reliability-based metric can both be mathematically related to the p-value metric in classical hypothesis testing. Numerical studies are conducted to apply the above validation methods to gas damping prediction for radio frequency (RF) micro-electro-mechanical-system (MEMS) switches. The model of interest is a generalized polynomial chaos (gPC) surrogate model constructed from expensive runs of a physics-based simulation model, and validation data are collected from fully characterized experiments.
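As a minimal illustration of one of the four approaches, the area metric is commonly computed as the area between the empirical CDF of the model's stochastic predictions and the empirical CDF of the experimental observations. The sketch below is a generic implementation of that idea, not the paper's specific formulation; the function and variable names are illustrative assumptions.

```python
import numpy as np

def area_metric(model_samples, data_samples):
    """Area between the empirical CDFs of model predictions and
    experimental observations (one common form of the area
    validation metric; a smaller value indicates closer agreement)."""
    pts = np.sort(np.concatenate([model_samples, data_samples]))
    # Empirical CDF of each sample set evaluated at the pooled points
    F_model = np.searchsorted(np.sort(model_samples), pts, side="right") / len(model_samples)
    F_data = np.searchsorted(np.sort(data_samples), pts, side="right") / len(data_samples)
    # Integrate |F_model - F_data| over the pooled support (step functions,
    # so a left Riemann sum over the gaps between pooled points is exact)
    widths = np.diff(pts)
    return float(np.sum(np.abs(F_model - F_data)[:-1] * widths))

obs = np.array([1.0, 2.0, 3.0])
print(area_metric(obs, obs))        # 0.0 — identical distributions
print(area_metric(obs, obs + 0.5))  # 0.5 — a constant shift of d yields area d
```

Because the metric accumulates the signed gap between the two CDFs as an absolute area, a model whose predictions sit consistently above or below the data produces a nonzero metric, which is how this family of metrics captures the directional bias discussed in the abstract.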