Abstract

Gaussian processes (GPs) are a non-parametric Bayesian approach widely used as surrogate models in data-driven optimization to approximate expensive exact functions. However, building a GP incurs cubic computational complexity in the number of training samples. To address this limitation, this paper proposes adaptive hyperparameter sharing based on transfer learning for scalable GPs. In this method, the hyperparameters of source tasks are adaptively shared with the target task through a linear predictor. Experimental analyses show that the method reduces the computational cost of building GPs without sacrificing modeling capability, and its effectiveness is demonstrated on a set of benchmark problems.
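The abstract does not give implementation details, but the core idea of sharing hyperparameters via a linear predictor can be sketched. The following is a minimal Python illustration, not the paper's algorithm: the task features, ridge solver, and all helper names (`task_features`, `fit_linear_predictor`, `predict_hparams`) are hypothetical assumptions introduced here. The point it shows is that once source-task hyperparameters are available, the target task's hyperparameters are obtained by a cheap linear map, skipping the O(n^3) marginal-likelihood optimization.

```python
import numpy as np

# Sketch under assumptions (not the paper's method):
#   * each task i is described by a cheap feature vector phi_i,
#   * each source task has already-tuned GP log-hyperparameters theta_i
#     (e.g., [log length-scale, log signal variance]),
#   * a linear map fitted on the source tasks predicts the target task's
#     hyperparameters, so no cubic-cost optimization runs on the target GP.

def task_features(X, y):
    """Cheap descriptive statistics of a task's data (an illustrative choice)."""
    return np.array([X.mean(), X.std(), y.mean(), y.std()])

def fit_linear_predictor(Phi, Theta, ridge=1e-3):
    """Ridge least-squares map from task features to GP log-hyperparameters."""
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Theta)  # shape: (n_features, n_hparams)

def predict_hparams(W, phi):
    """Shared hyperparameters for a new target task: one matrix product."""
    return phi @ W

# Source tasks: feature rows Phi and their tuned log-hyperparameters Theta,
# assumed precomputed (random placeholders here).
rng = np.random.default_rng(0)
Phi = rng.normal(size=(5, 4))    # 5 source tasks, 4 features each
Theta = rng.normal(size=(5, 2))  # their tuned log-hyperparameters

W = fit_linear_predictor(Phi, Theta)

# Target task: derive features from its data and read off hyperparameters.
X_t, y_t = rng.normal(size=20), rng.normal(size=20)
theta_t = predict_hparams(W, task_features(X_t, y_t))
print("predicted target log-hyperparameters:", theta_t)
```

In this sketch the target GP can then be built once with the predicted hyperparameters, which is where the claimed cost reduction would come from; how the paper's "adaptive" weighting of source tasks is realized is not specified in the abstract.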
