Abstract

Approximate mathematical models (metamodels) are often used as surrogates for more computationally intensive simulations. The common practice is to construct multiple metamodels from a common training data set, evaluate their accuracy, and then use only the single model perceived as the best while discarding the rest. This practice has shortcomings: it does not take full advantage of the resources devoted to constructing the different metamodels, and it rests on the assumption that changes in the training data set will not jeopardize the accuracy of the selected model. It is possible to overcome these drawbacks, and to improve the prediction accuracy of the surrogate, if the separate stand-alone metamodels are combined to form an ensemble. Motivated by previous research on committees of neural networks and ensembles of surrogate models, this paper presents a technique for developing a more accurate ensemble of multiple metamodels. The selection of weight factors in the general weighted-sum formulation of an ensemble is treated as an optimization problem whose desired solution minimizes a selected error metric. The proposed technique is evaluated on one industrial and four benchmark problems. The effect of different metrics for estimating the prediction error, at either the training data set or a few validation points, is also explored. The results show that the optimized ensemble provides more accurate predictions than the stand-alone metamodels and, for most problems, even surpasses the previously reported ensemble approaches.
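As a concrete illustration of the weighted-sum formulation described above, the sketch below selects the ensemble weights by minimizing the root-mean-square error at a set of validation points, subject to the usual non-negativity and sum-to-one constraints on the weights. This is a minimal sketch, not the paper's exact method: the RMSE objective, the SLSQP optimizer, and all function and variable names are assumptions chosen for illustration, and the paper itself examines several alternative error metrics and evaluation points.

```python
import numpy as np
from scipy.optimize import minimize


def ensemble_predict(weights, member_predictions):
    """Weighted-sum ensemble: y_hat(x) = sum_i w_i * y_hat_i(x).

    member_predictions has shape (n_points, n_models).
    """
    return member_predictions @ weights


def optimize_weights(member_predictions, y_true):
    """Treat weight selection as an optimization problem: find the
    weights that minimize RMSE at the validation points, subject to
    w_i >= 0 and sum(w_i) = 1. (RMSE is an illustrative choice of
    error metric, not the only one considered in the paper.)"""
    n_models = member_predictions.shape[1]

    def rmse(w):
        residual = ensemble_predict(w, member_predictions) - y_true
        return np.sqrt(np.mean(residual ** 2))

    result = minimize(
        rmse,
        x0=np.full(n_models, 1.0 / n_models),  # start from equal weights
        bounds=[(0.0, 1.0)] * n_models,        # non-negative weights
        constraints={"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        method="SLSQP",
    )
    return result.x


# Hypothetical example: three stand-alone metamodels evaluated
# at five validation points with known true responses.
preds = np.array([
    [1.0, 1.2, 0.9],
    [2.1, 1.9, 2.0],
    [2.9, 3.2, 3.1],
    [4.2, 3.8, 4.0],
    [5.1, 4.9, 5.0],
])
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = optimize_weights(preds, y)
print("optimized weights:", w)
```

Once the weights are fixed, the ensemble is evaluated at new points with the same `ensemble_predict` call; a member that performs poorly on the chosen error metric simply receives a weight near zero rather than being discarded outright.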
