Abstract

In model development, model calibration and validation play complementary roles in learning reliable models. In this article, we expand the Bayesian Validation Metric framework into a general calibration and validation framework by inverting the validation mathematics into a generalized Bayesian method for model calibration and regression. We perform Bayesian regression based on a user’s definition of model-data agreement. This allows model selection on any type of data distribution, unlike Bayesian and standard regression techniques, which “fail” in some cases. We show that our tool can represent and combine least squares, likelihood-based, and Bayesian calibration techniques in a single framework while generalizing aspects of these methods. The tool also offers new insights into the interpretation of predictive envelopes (also known as confidence bands) and gives the analyst more control over them. We demonstrate the validity of our method with three numerical examples that calibrate different models, including a model of energy dissipation in lap joints under impact loading. By calibrating models with respect to the validation metrics they are ultimately required to pass, reliability and safety metrics may be integrated into, and automatically adopted by, the model during the calibration phase.
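As a rough illustration of the idea of calibrating against a user-defined model-data agreement condition, the sketch below performs a toy Bayesian calibration in which the usual likelihood is replaced by a probability of agreement. The linear model, Gaussian noise level, tolerance `epsilon`, and grid-based posterior are all assumptions made for this example and are not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical sketch (not the authors' code): calibrate a linear model
# y = a*x + b by replacing the standard likelihood with the probability
# that model and data "agree", here defined as |prediction - data| <= epsilon.

rng = np.random.default_rng(0)

# Synthetic data, assumed for illustration only
x = np.linspace(0.0, 1.0, 25)
y_obs = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=x.size)

sigma = 0.1      # assumed Gaussian measurement-noise standard deviation
epsilon = 0.15   # user-chosen agreement tolerance

def log_prob_agreement(a, b):
    """Log-probability that each noisy observation agrees with the model
    prediction to within +/- epsilon, treating points as independent."""
    pred = a * x + b
    upper = norm.cdf(pred + epsilon, loc=y_obs, scale=sigma)
    lower = norm.cdf(pred - epsilon, loc=y_obs, scale=sigma)
    return np.sum(np.log(np.clip(upper - lower, 1e-300, None)))

# Uniform prior over a coarse parameter grid; posterior ∝ prior * P(agreement)
a_grid = np.linspace(1.0, 3.0, 201)
b_grid = np.linspace(0.0, 1.0, 201)
log_post = np.array([[log_prob_agreement(a, b) for b in b_grid] for a in a_grid])

ia, ib = np.unravel_index(np.argmax(log_post), log_post.shape)
print(f"MAP estimate: a = {a_grid[ia]:.3f}, b = {b_grid[ib]:.3f}")
```

Tightening `epsilon` makes this agreement-based posterior approach a conventional Gaussian likelihood, while other choices of the agreement rule would recover least-squares-like or interval-based calibration behavior.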
