Abstract
The composite kernel learning (CKL) method is introduced to efficiently construct composite kernels for Gaussian process (GP) surrogate models with applications in engineering design. The mixture of kernel functions is cast as a weighted-sum model in which the weights are treated as extra hyperparameters to yield a higher optimum likelihood. The CKL framework aims to improve the accuracy of the GP and to relieve the difficulty of kernel selection. In this paper, a combination of five kernel functions is studied, namely, Gaussian, Matérn-3/2, Matérn-5/2, exponential, and cubic, with all kernels sharing the same length scale. Numerical studies were performed on a set of engineering problems to assess the approximation capability of GP-CKL. The results show that, in general, GP-CKL yields a lower approximation error and higher robustness than single-kernel GP models and model selection methods. The numerical experiments indicate that GP-CKL is worth using to achieve higher accuracy, at the cost of a larger number of hyperparameters and a longer training time. In addition, experiments were performed with separate length scales for each kernel and each variable. However, this variant of CKL is not recommended because it is prone to overfitting and is significantly more expensive than the other GP variants.
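The weighted-sum construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific kernel formulas (a shared length scale, a normalized weight vector, and a compactly supported cubic spline kernel) are assumptions for the sake of the example.

```python
import numpy as np

def composite_kernel(r, weights, length_scale=1.0):
    """Weighted sum of five stationary kernels evaluated at distance r.

    All kernels share one length scale, as in the paper's main variant.
    The weights are the extra hyperparameters that CKL would tune by
    maximizing the GP likelihood (not done here).
    """
    t = np.abs(r) / length_scale
    kernels = [
        np.exp(-0.5 * t**2),                                      # Gaussian
        (1 + np.sqrt(3) * t) * np.exp(-np.sqrt(3) * t),           # Matérn-3/2
        (1 + np.sqrt(5) * t + 5 * t**2 / 3) * np.exp(-np.sqrt(5) * t),  # Matérn-5/2
        np.exp(-t),                                               # exponential
        np.where(t < 1, 1 - 3 * t**2 + 2 * t**3, 0.0),            # cubic spline (assumed form)
    ]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights form a convex combination (assumption)
    return sum(wi * k for wi, k in zip(w, kernels))
```

In a full GP-CKL implementation, the five weights would be appended to the usual hyperparameter vector (length scale, process variance) and optimized jointly via the marginal likelihood, which is what makes the mixture adapt to the data.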