Abstract

This paper discusses a new sequential adaptive design of experiments (DoE) approach for global Kriging metamodeling applications. The sequential implementation is established by using the current metamodel, formulated based on the existing experiments, to guide the selection of the optimal new experiment(s). The score function, defining the DoE objective, combines two components: (1) the metamodel prediction variability, expressed through the predictive variance, and (2) the metamodel bias, approximated through the leave-one-out cross-validation (LOOCV) error. The latter is used as a weighting factor to extend traditional DoE approaches that focus solely on the metamodel prediction variability. Two such approaches are considered here, adopting either the integrated mean squared error or the maximum mean squared error as the basic component of the score function. The incorporation of bias information as a weighting within these well-established approaches facilitates a direct extension of their respective computational workflows, making the proposed implementation attractive from a computational perspective. An efficient optimization scheme for identifying the next experiment is also discussed, along with the balancing of exploration and exploitation between the two components of the score function. The incorporation of LOOCV weightings is shown to be highly beneficial across a total of six analytical and engineering examples. Furthermore, these examples demonstrate that for DoE approaches that use LOOCV information as weights, it is preferable to update the predictive variance to explicitly account for the impact of the new experiment, rather than relying strictly on the current metamodel variance.
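To illustrate the idea, the following is a minimal sketch (not the authors' implementation) of one sequential selection step: a Kriging (Gaussian process) metamodel is fit to the current design, LOOCV errors are computed at the design points, and the next experiment is chosen by maximizing the predictive variance weighted by the LOOCV error. The nearest-neighbour weighting, the maximum-variance-type criterion, and all function names are illustrative assumptions; the paper's exact IMSE/MMSE formulations and variance update are not reproduced here.

```python
# Illustrative sketch of LOOCV-weighted sequential DoE for Kriging metamodels.
# Assumptions: scikit-learn Gaussian process as the Kriging surrogate,
# nearest-design-point LOOCV error as the candidate weight.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def loocv_errors(X, y):
    """Leave-one-out cross-validation errors of the metamodel at the design points."""
    errs = np.empty(len(X))
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(X[mask], y[mask])
        errs[i] = abs(gp.predict(X[i:i + 1])[0] - y[i])
    return errs


def select_next_point(X, y, candidates):
    """Pick the candidate that maximizes LOOCV-weighted predictive variance."""
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)

    errs = loocv_errors(X, y)
    # Weight each candidate's predictive variance by the LOOCV error
    # of its nearest existing design point (illustrative weighting choice).
    nearest = np.argmin(
        np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=2), axis=1
    )
    score = std ** 2 * errs[nearest]
    return candidates[np.argmax(score)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: np.sin(6 * x[:, 0]) + 0.5 * x[:, 0] ** 2  # toy 1-D response
    X = rng.uniform(0, 1, size=(8, 1))
    y = f(X)
    candidates = rng.uniform(0, 1, size=(200, 1))
    print("next experiment at:", select_next_point(X, y, candidates))
```

In a full implementation the selected point would be evaluated, appended to the design, and the procedure repeated; the paper additionally updates the predictive variance to reflect the candidate experiment before scoring, which this sketch omits.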
