Abstract

The performance of the differential evolution (DE) algorithm depends significantly on its mutation strategy. Because there are six commonly used mutation strategies in DE, it is difficult to select a suitable one for different real-life optimization problems, and in practice the choice is usually based on personal experience. To address this problem, this paper proposes a mixed mutation strategy DE algorithm based on a deep Q-network (DQN), named DEDQN, in which a deep reinforcement learning approach adaptively selects the mutation strategy during the evolution process. Applying the DQN to DE requires two steps. First, the DQN is trained offline by collecting data on the fitness landscape and the benefit (reward) of applying each mutation strategy over multiple runs of DEDQN on the training functions. Second, at each generation, the trained DQN predicts the mutation strategy according to the fitness landscape of each test function. In addition, a historical memory parameter adaptation mechanism is employed to further improve DEDQN. The performance of DEDQN is evaluated on the CEC2017 benchmark function set and compared with five state-of-the-art DE algorithms. The experimental results indicate the competitive performance of the proposed algorithm.
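
To make the per-generation selection concrete, the following is a minimal sketch (not the authors' code) of the inference step described above: fitness-landscape features form the DQN state, the trained network scores the six classical mutation strategies, and the reward is the fitness improvement obtained. The specific landscape features, network architecture, and objective function here are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

# Six commonly used DE mutation strategies (the action set).
STRATEGIES = ["rand/1", "rand/2", "best/1", "best/2",
              "current-to-best/1", "current-to-rand/1"]

def landscape_features(pop, fit):
    """Toy fitness-landscape descriptors used as the DQN state (assumed)."""
    dist_to_best = np.linalg.norm(pop - pop[fit.argmin()], axis=1)
    return torch.tensor([
        fit.std() / (abs(fit.mean()) + 1e-12),        # fitness dispersion
        np.corrcoef(dist_to_best, fit)[0, 1],         # fitness-distance correlation
        (fit.max() - fit.min()) / (abs(fit.max()) + 1e-12),
    ], dtype=torch.float32)

# Stand-in for the offline-trained DQN: state features -> one Q-value per strategy.
dqn = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, len(STRATEGIES)))

def mutate(pop, fit, strategy, F=0.5):
    """Apply one classical DE mutation to every individual (subset shown)."""
    n, _ = pop.shape
    idx = np.array([np.random.choice([j for j in range(n) if j != i], 3, replace=False)
                    for i in range(n)])
    r1, r2, r3 = pop[idx[:, 0]], pop[idx[:, 1]], pop[idx[:, 2]]
    best = pop[fit.argmin()]
    if strategy == "best/1":
        return best + F * (r1 - r2)
    if strategy == "current-to-best/1":
        return pop + F * (best - pop) + F * (r1 - r2)
    return r1 + F * (r2 - r3)  # "rand/1"; remaining variants omitted for brevity

# One generation: the trained DQN picks the strategy for the whole population.
pop = np.random.uniform(-5, 5, (20, 10))
fit = np.sum(pop ** 2, axis=1)              # stand-in objective (sphere function)
with torch.no_grad():
    action = dqn(landscape_features(pop, fit)).argmax().item()
mutants = mutate(pop, fit, STRATEGIES[action])
reward = max(0.0, fit.min() - np.sum(mutants ** 2, axis=1).min())  # fitness gain
```

During offline training, tuples of (landscape state, chosen strategy, reward) collected over many runs on the training functions would be used to fit the Q-network; at test time only the forward pass shown above is needed.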
