Dynamic multi-objective optimization problems (DMOPs) are challenging because they require tracking the Pareto optimal front (POF) and Pareto optimal set (POS) throughout the optimization process. In recent years, transfer learning (TL), which reuses empirical knowledge from past environments, has emerged as an effective approach for solving DMOPs. However, negative transfer can occur when the transfer method does not suit the transfer task: it may divert the search path and severely reduce efficiency. Reducing the occurrence of negative transfer, and thereby saving the running time of dynamic multi-objective evolutionary algorithms (DMOEAs), is therefore an important issue. To this end, a division-selection transfer learning evolutionary algorithm for dynamic multi-objective optimization (DST-DMOEA) is designed. Specifically, individuals with high Spearman correlation are relatively stable across environments, so they are selected to train a Support Vector Regression (SVR) model that captures solution features more accurately; the model then predicts the objective values of historical solutions, which are accordingly divided into elite and non-elite solutions. Subsequently, the elite solutions are optimized and transferred with an individual TL method that incorporates local information, while the non-elite solutions are handled with a manifold TL method that captures the overall data distribution and internal structure. The predicted individuals generated by the two TL components are then merged to form the initial population for the new environment. Compared with other algorithms, the initial solutions of DST-DMOEA are closer to the true POF, effectively reducing negative transfer. In addition, DST-DMOEA shows superior performance on more than 30 of 51 test instances.
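The division-selection step described above can be illustrated with a minimal sketch: select environment-stable individuals via Spearman correlation, train an SVR surrogate on them, and split historical solutions into elite and non-elite sets by predicted objective value. All function names, thresholds, data shapes, and the exact way the per-individual correlation and objective aggregation are computed are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of the division-selection step (assumed details, not the paper's code).
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVR


def divide_historical_solutions(decisions_prev, objectives_prev, objectives_curr,
                                historical_solutions, corr_threshold=0.8,
                                elite_fraction=0.5):
    """Split historical solutions into elite / non-elite sets.

    decisions_prev:        (n, d) decision vectors from the previous environment
    objectives_prev/curr:  (n, m) objective values in the previous / current environment
    historical_solutions:  (k, d) decision vectors to be divided
    """
    # 1. Keep individuals whose objective vectors keep a consistent ordering
    #    across the two environments (assumed stability criterion).
    stable_idx = []
    for i in range(len(decisions_prev)):
        rho, _ = spearmanr(objectives_prev[i], objectives_curr[i])
        if rho >= corr_threshold:
            stable_idx.append(i)

    # 2. Train an SVR surrogate on the stable individuals
    #    (decision vector -> aggregated objective value; mean is an assumed scalarization).
    X_train = decisions_prev[stable_idx]
    y_train = objectives_prev[stable_idx].mean(axis=1)
    surrogate = SVR(kernel="rbf").fit(X_train, y_train)

    # 3. Predict objective values of historical solutions and split them:
    #    lower predicted value is treated as better (minimization assumed).
    preds = surrogate.predict(historical_solutions)
    order = np.argsort(preds)
    n_elite = int(elite_fraction * len(historical_solutions))
    elite = historical_solutions[order[:n_elite]]
    non_elite = historical_solutions[order[n_elite:]]
    return elite, non_elite
```

In the full algorithm, the elite set would then feed the individual TL component and the non-elite set the manifold TL component, with their predicted individuals merged into the initial population.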