Abstract

Multitask optimization aims to solve two or more optimization tasks simultaneously by leveraging intertask knowledge transfer. However, as the number of tasks grows to the scale of many-task optimization, knowledge transfer between tasks becomes more uncertain and challenging, which can degrade optimization performance. To fully exploit the many-task optimization framework while minimizing potential negative transfer, this article proposes an evolutionary many-task optimization algorithm based on a multisource knowledge transfer mechanism, namely, EMaTO-MKT. Specifically, in each iteration, EMaTO-MKT adaptively determines the probability of applying knowledge transfer according to the evolution experience, thereby balancing self-evolution within each task against knowledge transfer among tasks. To perform knowledge transfer, EMaTO-MKT selects multiple highly similar tasks, measured by maximum mean discrepancy, as the learning sources for each task. A knowledge transfer strategy based on local distribution estimation is then applied to enable learning from the multiple sources. Compared with other state-of-the-art evolutionary many-task algorithms on benchmark test suites, EMaTO-MKT shows competitive performance in solving many-task optimization problems.
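
The abstract names maximum mean discrepancy (MMD) as the measure used to rank candidate source tasks by similarity. Below is a minimal sketch of how squared MMD between two task populations could be estimated; it assumes a Gaussian kernel, a biased estimator, and that each task's population is mapped into a common search space. The function names, bandwidth, and ranking example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two sets of row vectors."""
    sq_dists = (np.sum(a**2, axis=1)[:, None]
                + np.sum(b**2, axis=1)[None, :]
                - 2.0 * a @ b.T)
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd_squared(pop_x, pop_y, bandwidth=1.0):
    """Biased estimate of squared MMD between two task populations.

    pop_x: array of shape (n, d); pop_y: array of shape (m, d).
    Each row is a decision vector mapped into a common unified space.
    """
    k_xx = gaussian_kernel(pop_x, pop_x, bandwidth)
    k_yy = gaussian_kernel(pop_y, pop_y, bandwidth)
    k_xy = gaussian_kernel(pop_x, pop_y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean()

# Illustrative usage: rank candidate source tasks by similarity to a target
# task (smaller MMD = more similar), as a source-selection step might do.
rng = np.random.default_rng(0)
target = rng.random((50, 10))                       # target task population
sources = [rng.random((50, 10)) for _ in range(5)]  # other task populations
ranking = sorted(range(len(sources)),
                 key=lambda i: mmd_squared(target, sources[i]))
print("most similar source tasks first:", ranking)
```

In such a scheme, the tasks with the smallest MMD to the target would be retained as the multiple learning sources for knowledge transfer.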
