Abstract

Dynamic multiobjective optimization problems (DMOPs) are characterized by multiple objectives that change over time across varying environments; these environmental changes can be described as various dynamics. However, existing dynamic multiobjective algorithms (DMOAs) struggle with DMOPs because they cannot learn in different environments to guide the search. In addition, solving DMOPs is typically an online task, which requires a DMOA to have low computational cost. To address these challenges, we propose a particle search guidance network (PSGN) capable of directing individuals' search actions, including learning target selection and acceleration coefficient control. PSGN can learn the actions that should be taken in each environment by rewarding or penalizing the network through reinforcement learning, and it can therefore tackle DMOPs with various dynamics. Additionally, we efficiently adjust the PSGN hidden nodes and update the output weights in an incremental learning manner, enabling PSGN to direct particle search at low computational cost. We compare the proposed PSGN with seven state-of-the-art algorithms, and its excellent performance verifies that PSGN can handle DMOPs of various dynamics in a highly computationally efficient way.
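
To illustrate the idea of a network guiding a particle's search actions, the following is a minimal sketch assuming a conventional PSO-style velocity update. The function names (`guided_velocity_update`, `dummy_guidance_net`), the inertia term, and the exemplar set are illustrative assumptions and do not reproduce the paper's exact PSGN formulation; they only show how a learned policy could select a learning target and set the acceleration coefficients per particle.

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_velocity_update(position, velocity, personal_best, exemplars,
                           guidance_net, inertia=0.7):
    """One particle's update where a (hypothetical) guidance network chooses
    the learning target and the acceleration coefficients, instead of using
    a fixed global-best exemplar and constant coefficients."""
    # The network maps the particle's state to an action:
    # an index into candidate exemplars and two acceleration coefficients.
    target_idx, c1, c2 = guidance_net(position, velocity, exemplars)
    target = exemplars[target_idx]

    r1 = rng.random(position.shape)
    r2 = rng.random(position.shape)
    new_velocity = (inertia * velocity
                    + c1 * r1 * (target - position)
                    + c2 * r2 * (personal_best - position))
    new_position = position + new_velocity
    return new_position, new_velocity

# Placeholder policy standing in for the learned network (assumed interface).
def dummy_guidance_net(position, velocity, exemplars):
    return 0, 1.5, 1.5

pos = np.zeros(2)
vel = np.zeros(2)
pbest = rng.random(2)
exemplars = rng.random((5, 2))
pos, vel = guided_velocity_update(pos, vel, pbest, exemplars, dummy_guidance_net)
```

In the actual method, the placeholder policy would be replaced by the trained PSGN, whose output weights are updated incrementally as environments change.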
