Abstract

Multi-surrogate assisted evolutionary algorithms, which use inexpensive global and local surrogate models to assist evolutionary search, have demonstrated a remarkable ability to solve expensive optimization problems (EOPs). However, the training samples selected for the local surrogate model in existing algorithms may not be updated continuously, which can waste computing resources. This paper proposes a multi-surrogate multi-tasking genetic algorithm with an adaptive training sample selection strategy (MS-MTGAwA), in which the local surrogate model is updated adaptively based on information from previously established local models to enhance its ability to exploit the optimal solution. The adaptive training sample selection strategy (ATS) uses the optimal points found by historical local models as training samples. In addition, MS-MTGAwA inherits the optimization framework of a multi-tasking genetic algorithm, and the radial basis function model is chosen as the basis of both the global and local surrogate models. A set of common benchmark functions with dimensions ranging from 10 to 100 and the tension/compression spring design problem are adopted to validate the performance of the proposed algorithm. Based on the average best fitness value over thirty independent runs on the benchmark functions, MS-MTGAwA ranks first in Friedman's test against five state-of-the-art algorithms: S-JADE, GORS-SSLPSO, SAHO, SAMSO, and MS-MTO. The proposed ATS can be applied to any algorithm with a local surrogate model, making full use of the historical information of local models to improve the prediction accuracy of the next local model. If this paper is accepted, the MATLAB codes associated with this paper will be uploaded to https://github.com/Zhongbo-Hu/Prediction-Evolutionary-Algorithm-HOMEPAGE.
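As a rough illustration of the ATS idea summarized above, the following MATLAB sketch rebuilds a local RBF surrogate from the evaluated samples nearest the current best solution, augmented with the optimal points recorded from previously built local models. This is a minimal sketch under assumed data structures, not the authors' implementation; all names (build_local_rbf, predict_rbf, pdistmat, histBest, archX, nNear, etc.) are hypothetical.

```matlab
% Hypothetical sketch of an ATS-style local RBF surrogate update.
% archX/archY : all exactly evaluated samples so far (n-by-d / n-by-1)
% histBest    : k-by-d matrix of best points found by previous local models (the archive)
% histVal     : k-by-1 vector of their exact fitness values
% xBest       : current best solution (1-by-d), used to pick nearby samples
function [w, centers, sigma] = build_local_rbf(archX, archY, histBest, histVal, xBest, nNear)
    % pick the nNear evaluated samples closest to the current best solution
    d2 = sum((archX - xBest).^2, 2);
    [~, idx] = sort(d2);
    n = min(nNear, size(archX, 1));
    nearX = archX(idx(1:n), :);
    nearY = archY(idx(1:n));

    % ATS idea: augment the local training set with historical local-model optima
    [centers, iu] = unique([nearX; histBest], 'rows', 'stable');  % drop duplicate rows
    yAll   = [nearY; histVal];
    yTrain = yAll(iu);

    % Gaussian RBF interpolation: solve Phi * w = yTrain
    D     = pdistmat(centers, centers);
    sigma = max(mean(D(:)), eps);            % simple kernel-width heuristic
    Phi   = exp(-(D / sigma).^2);
    w     = Phi \ yTrain;
end

function yhat = predict_rbf(w, centers, sigma, X)
    % evaluate the RBF surrogate at query points X (m-by-d)
    Phi  = exp(-(pdistmat(X, centers) / sigma).^2);
    yhat = Phi * w;
end

function D = pdistmat(A, B)
    % pairwise Euclidean distances without toolbox dependencies
    D = sqrt(max(sum(A.^2, 2) + sum(B.^2, 2)' - 2 * (A * B'), 0));
end
```

In this sketch, the archived optima of earlier local models serve as extra, information-rich training points for the next local surrogate, which is the mechanism the abstract credits for improving its prediction accuracy.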
