Abstract

Multiobjective multitasking optimization (MTO) is an emerging research topic in the field of evolutionary computation. In contrast to multiobjective optimization, MTO solves multiple optimization tasks simultaneously, aiming to improve the overall performance of all tasks through knowledge transfer among them. Recently, MTO has attracted the attention of many researchers, and several algorithms have been proposed in the literature. However, one of the crucial issues, finding useful knowledge to transfer, has rarely been studied. With this in mind, this article proposes an MTO algorithm based on incremental learning (EMTIL). Specifically, the transferred solutions (the form in which knowledge is represented) are selected by incremental classifiers, which are capable of identifying solutions valuable for knowledge transfer. The training data are generated from the knowledge transfer performed at each generation. Furthermore, the search spaces of the tasks are explored through the proposed inter-task mapping approach, which helps the tasks escape from their local Pareto fronts. Empirical studies have been conducted on 15 MTO problems to assess the effectiveness of EMTIL. The experimental results demonstrate that EMTIL handles MTO more effectively than the existing algorithms.
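
The sketch below illustrates the general idea of using an incremental classifier to pick which source-task solutions to transfer, updated online with each generation's transfer outcomes. It is a minimal illustration only, assuming scikit-learn's SGDClassifier with partial_fit as the incremental learner; the function names, labeling scheme, and data shapes are hypothetical and not the paper's actual interface.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def update_classifier(clf, transferred, was_useful, first_call):
    """Incrementally update the classifier with this generation's transfer
    outcomes (1 = the transferred solution improved the target task, 0 = it did not)."""
    if first_call:
        # partial_fit requires the full class list on the first call.
        clf.partial_fit(transferred, was_useful, classes=np.array([0, 1]))
    else:
        clf.partial_fit(transferred, was_useful)
    return clf

def select_transfer_candidates(clf, source_population, k=10):
    """Return the k source-task solutions the classifier rates as most
    likely to be useful when transferred to the target task."""
    probs = clf.predict_proba(source_population)[:, 1]
    return source_population[np.argsort(probs)[-k:]]

# Toy usage with 30-dimensional decision vectors and stand-in labels.
rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss", random_state=0)
source_pop = rng.random((100, 30))
labels = rng.integers(0, 2, size=100)  # placeholder transfer outcomes
clf = update_classifier(clf, source_pop, labels, first_call=True)
candidates = select_transfer_candidates(clf, source_pop, k=5)
print(candidates.shape)  # (5, 30)
```

In an actual MTO loop, the labels would come from evaluating whether each transferred solution dominated or improved upon solutions in the target task's population, so the classifier gradually learns which regions of the source population transfer well.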
