Abstract

The design optimization process often relies on computational models for analysis or simulation. These models must be validated to quantify the expected accuracy of the obtained design solutions. It can be argued that validation of computational models over the entire design space is neither affordable nor required. In previous work, motivated by the fact that most numerical optimization algorithms generate a sequence of candidate designs, we proposed a new paradigm in which design optimization and calibration-based model validation are performed concurrently in a sequence of variable-size local domains that are small relative to the entire design space. A key element of this approach is how to account for variability in test data and model predictions in order to determine the size of the local domains at each stage of the sequential design optimization process. In this article, we discuss two alternative techniques for accomplishing this: parametric and nonparametric bootstrapping. Parametric bootstrapping assumes a Gaussian distribution for the error between test and model data and uses maximum likelihood estimation to calibrate the prediction model. Nonparametric bootstrapping does not rely on the Gaussian assumption and therefore provides a more general way to size the local domains for applications where distributional assumptions are difficult to verify or not met at all. If the distributional assumptions are met, parametric methods are preferable to nonparametric methods. We use a benchmark problem from the validation literature to demonstrate the application of the two techniques. Which technique to use depends on whether the Gaussian distribution assumption is appropriate given the available information.
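As a rough illustration of the distinction drawn above, the sketch below contrasts a parametric bootstrap (Gaussian errors fitted by maximum likelihood) with a nonparametric bootstrap (resampling observed errors with replacement) for estimating the spread of the mean test-versus-model error. The residual values, replicate count, and the use of a percentile interval are illustrative assumptions, not the paper's actual procedure for sizing the local domains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residuals between test data and model predictions at a
# candidate design (illustrative values only, not taken from the paper).
residuals = np.array([0.8, -0.3, 1.1, 0.2, -0.6, 0.9, 0.4, -0.1])
n = len(residuals)
B = 2000  # number of bootstrap replicates

# Parametric bootstrap: assume Gaussian errors, fit mean and standard
# deviation by maximum likelihood, then draw synthetic residual samples
# from the fitted distribution.
mu_hat, sigma_hat = residuals.mean(), residuals.std(ddof=0)  # Gaussian MLEs
param_means = np.array([
    rng.normal(mu_hat, sigma_hat, size=n).mean() for _ in range(B)
])

# Nonparametric bootstrap: resample the observed residuals with
# replacement, making no distributional assumption.
nonparam_means = np.array([
    rng.choice(residuals, size=n, replace=True).mean() for _ in range(B)
])

# A spread measure such as a percentile interval on the mean error could
# then inform the size of the local domain at the current optimization stage.
for name, sample in [("parametric", param_means), ("nonparametric", nonparam_means)]:
    lo, hi = np.percentile(sample, [2.5, 97.5])
    print(f"{name:13s} 95% interval on mean error: [{lo:.3f}, {hi:.3f}]")
```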
