Abstract

Dynamic constrained optimization problems (DCOPs) are common and important real-world optimization problems that are difficult to solve. Dynamic constrained evolutionary algorithms (DCEAs) are widely used methods for solving DCOPs. However, existing DCEAs often struggle to converge, particularly on DCOPs with drastic dynamic changes or intricate constraints. To address this issue, this paper proposes a novel DCEA called DCEA-DQN, which leverages the powerful perception and decision-making capabilities of the Deep Q-Network (DQN). DCEA-DQN integrates two DQNs to enhance its performance. The first DQN is designed to respond adaptively to dynamic changes, enabling effective handling of DCOPs with various types and degrees of change. It provides a high-quality re-initialized population for the subsequent static optimization, resulting in faster and better convergence. The second DQN guides the mutation direction during offspring generation: it steers the population toward better feasible regions or directs it toward the optimal individual within the current feasible region. Moreover, a penalty mechanism is employed to handle constraints during offspring generation. To evaluate the performance of DCEA-DQN, comprehensive empirical studies are conducted on a new test suite called C-GMPB and a dynamic flexible job-shop scheduling problem. The experimental results, measured by two metrics commonly used in the field of DCOPs, EB and EO, demonstrate that DCEA-DQN outperforms six state-of-the-art DCEAs, achieving the best performance on 80% and 75% of all 24 test problems, respectively. The source code for DCEA-DQN is available at https://github.com/CIA-SZU/YRT.
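As an illustration of the penalty-based constraint handling mentioned in the abstract, the following is a minimal sketch of an additive penalty used to rank candidate offspring in a constrained evolutionary algorithm. The function name, the penalty coefficient, and the toy data are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np

def penalized_fitness(objective, constraint_violations, penalty_coef=1e3):
    """Additive penalty for a minimization problem: the larger the total
    constraint violation, the worse the penalized fitness.
    `penalty_coef` is a hypothetical tuning parameter, not from the paper."""
    total_violation = np.sum(np.maximum(constraint_violations, 0.0))
    return objective + penalty_coef * total_violation

# Usage: rank candidate offspring by penalized fitness instead of the raw objective.
offspring = [
    {"objective": 1.2, "violations": np.array([0.0, 0.0])},  # feasible
    {"objective": 0.8, "violations": np.array([0.5, 0.0])},  # infeasible
]
ranked = sorted(
    offspring,
    key=lambda ind: penalized_fitness(ind["objective"], ind["violations"]),
)
print(ranked[0])  # the feasible individual wins despite a worse raw objective
```

With this ranking, infeasible offspring are selected only when their constraint violations are small enough to be outweighed by their objective advantage, which is the usual trade-off a penalty mechanism encodes.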
