Abstract

Objective value estimation with computationally efficient surrogate models is widely used to reduce the cost of solving expensive multiobjective optimization problems (MOPs). However, because training data are scarce and are not shared across the training tasks of a surrogate-based system, the estimation accuracy of the surrogate models can be unsatisfactory. In this study, we present a novel surrogate methodology based on information transfer to address this problem. Specifically, in the proposed framework, the objectives of an MOP, which may show little apparent similarity or correlation, are linearly mapped to a set of related tasks. These related tasks are then used to train a multitask Gaussian process (MTGP). The MTGP enlarges the effective training data, leading to more confident learning of the model parameters. Predicted objective values are obtained by reverse-mapping the outputs of the learned MTGP model. In this way, the computational burden of evaluating the expensive objective functions of an MOP is substantially reduced while good estimation accuracy is maintained. The MTGP facilitates mutual information transfer across tasks, avoids learning each new task from scratch, and captures the underlying structural relationships between tasks. The proposed surrogate approach is embedded in MOEA/D to solve MOPs. Experiments under various scenarios show that the resulting algorithm outperforms other state-of-the-art surrogate-based multiobjective optimization algorithms.
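The pipeline described above (linearly map objectives to tasks, fit a multitask GP, reverse-map predictions) can be sketched in numpy. This is a minimal illustration, not the paper's implementation: it uses an intrinsic-coregionalization model with a fixed task covariance `B`, a fixed linear map `A`, and a fixed kernel length-scale, all of which are illustrative assumptions, whereas the paper learns the MTGP parameters from data.

```python
import numpy as np

def rbf(X1, X2, length=0.2):
    """Squared-exponential kernel between the rows of X1 and X2."""
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :] - 2.0 * X1 @ X2.T)
    return np.exp(-0.5 * d2 / length**2)

def mtgp_predict(X, T, X_new, B, noise=1e-8):
    """Posterior mean of an intrinsic-coregionalization multitask GP.

    X: (n, d) inputs; T: (n, k) task observations; B: (k, k) task
    covariance. The joint covariance kron(B, Kx) couples the k tasks,
    so observations of one task inform predictions for the others.
    """
    n, k = T.shape
    Kx = rbf(X, X)
    K = np.kron(B, Kx) + noise * np.eye(n * k)
    alpha = np.linalg.solve(K, T.T.reshape(-1))   # tasks stacked task-major
    Ks = np.kron(B, rbf(X, X_new))                # cross-covariance to X_new
    return (Ks.T @ alpha).reshape(k, -1).T        # (m, k) predicted task values

def estimate_objectives(X, F, X_new, A, b_jitter=1e-2):
    """Map objectives F to tasks T = F @ A.T, fit the MTGP on the tasks,
    then reverse-map the task predictions back to objective space."""
    T = F @ A.T
    B = A @ A.T + b_jitter * np.eye(A.shape[0])   # illustrative task covariance
    T_hat = mtgp_predict(X, T, X_new, B)
    return T_hat @ np.linalg.pinv(A.T)            # reverse linear mapping

# Toy bi-objective data: f1(x) = x^2, f2(x) = (1 - x)^2 on five samples.
X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
F = np.column_stack([X[:, 0]**2, (1.0 - X[:, 0])**2])
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 2 objectives -> 3 tasks
F_hat = estimate_objectives(X, F, X, A)             # predict at the samples
```

With near-zero observation noise, predictions at the training inputs closely reproduce the original objective values, which is the basic consistency check for the map/fit/reverse-map loop; in the actual method the surrogate would instead be queried at unevaluated candidate solutions.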
