Abstract

Surrogates are essential in surrogate-assisted evolutionary algorithms (SAEAs) for solving expensive optimization problems. Gaussian processes (GPs) are often used as surrogates because of their predictive accuracy and their ability to quantify prediction uncertainty. However, computing the inverse and determinant of the covariance matrix in GPs is computationally expensive, since its cost scales cubically with the number of training points. To address this issue, this paper proposes a scalable GP with hyperparameter sharing based on transfer learning. A linear predictor adaptively transfers hyperparameter knowledge from source full GPs (FGPs) to the target GP. The transfer is performed probabilistically, based on the similarity between the distributions of the training data of the FGPs and that of the target GP. In this way, the number of FGPs that must be built is significantly reduced, which in turn reduces the computational cost of the optimization. The scalable GP can be used in SAEAs to solve expensive optimization problems. The effectiveness of the proposed method is confirmed through testing on expensive benchmark problems and a real-world antenna design problem.
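The sketch below illustrates the two ideas the abstract relies on: the O(n³) cost of fitting a full GP (via a Cholesky factorization of the n × n covariance matrix) and a similarity-weighted linear combination of source hyperparameters as a cheap stand-in for refitting. The RBF kernel, the mean-distance similarity measure, and the function names are illustrative assumptions; the paper's actual linear predictor and similarity measure are not specified in the abstract.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale, variance):
    """Squared-exponential kernel (assumed form; the paper may use another)."""
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def neg_log_marginal_likelihood(X, y, lengthscale, variance, noise=1e-6):
    """The quantity minimized when fitting a full GP. The Cholesky
    factorization of the n x n covariance is O(n^3) -- the cost that
    hyperparameter transfer is meant to avoid repeating."""
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)                               # O(n^3)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))

def transfer_hyperparameters(sources, X_target):
    """Illustrative linear predictor: weight each source FGP's fitted
    hyperparameters by a similarity score between the source and target
    training-data distributions (here, distance between sample means --
    an assumption, not the paper's measure)."""
    mu_t = X_target.mean(axis=0)
    sims = np.array([np.exp(-np.linalg.norm(X_s.mean(axis=0) - mu_t)**2)
                     for X_s, _, _ in sources])
    w = sims / sims.sum()
    lengthscale = sum(wi * ls for wi, (_, ls, _) in zip(w, sources))
    variance = sum(wi * var for wi, (_, _, var) in zip(w, sources))
    return lengthscale, variance

# Usage: two source FGPs with already-fitted hyperparameters; the target GP
# reuses a weighted combination instead of running its own O(n^3) fit.
rng = np.random.default_rng(0)
sources = [(rng.normal(m, 1.0, size=(50, 2)), ls, var)
           for m, ls, var in [(0.0, 1.0, 2.0), (0.5, 1.5, 1.0)]]
X_target = rng.normal(0.2, 1.0, size=(20, 2))
print(transfer_hyperparameters(sources, X_target))
```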
