Abstract
Multi-objective evolutionary algorithms (MOEAs) are widely employed to tackle multi-objective optimization problems (MOPs). However, the choice of crossover operator significantly affects an algorithm's ability to balance population diversity and convergence. To enhance performance, this paper introduces a novel multi-state reinforcement learning-based multi-objective evolutionary algorithm, MRL-MOEA, which utilizes reinforcement learning (RL) to select crossover operators. In MRL-MOEA, a state model is established according to the distribution of individuals in the objective space, and different crossover operators are designed for the transitions between states. Additionally, during evolution the population may still converge inadequately in certain regions, leaving sparse areas on the regular Pareto front (PF). To address this issue, a weight vector adjustment strategy is devised to achieve a uniform distribution of solutions along the PF. Experimental results on the WFG and DTLZ benchmark suites, with the number of objectives ranging from 3 to 10, demonstrate the competitiveness of MRL-MOEA compared with other algorithms.
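The abstract does not spell out the RL machinery, so the following is a minimal illustrative sketch of how a tabular Q-learning agent might select a crossover operator given a discrete population state. The state labels, operator pool, reward definition, and hyperparameters here are assumptions for illustration, not MRL-MOEA's actual design.

```python
import random
from collections import defaultdict

# Illustrative operator pool; the paper's actual crossover operators may differ.
OPERATORS = ["sbx", "de_rand_1", "uniform"]


class QOperatorSelector:
    """Tabular Q-learning agent that picks a crossover operator per state.

    States are assumed to be discrete labels derived from the population's
    distribution in the objective space (e.g. 'diverse', 'converging',
    'stagnant'); this discretization is a placeholder, not the paper's
    exact state model.
    """

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, operator) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def select(self, state):
        # Epsilon-greedy choice: explore a random operator occasionally,
        # otherwise exploit the highest-valued operator for this state.
        if random.random() < self.epsilon:
            return random.choice(OPERATORS)
        return max(OPERATORS, key=lambda op: self.q[(state, op)])

    def update(self, state, op, reward, next_state):
        # Standard one-step Q-learning update toward the TD target.
        best_next = max(self.q[(next_state, o)] for o in OPERATORS)
        td_target = reward + self.gamma * best_next
        self.q[(state, op)] += self.alpha * (td_target - self.q[(state, op)])
```

In a typical MOEA loop, the reward could be an improvement indicator observed after applying the chosen operator (for example, the change in hypervolume or in aggregated fitness), with the agent then updating its table using the resulting state transition; the specific reward used by MRL-MOEA is not stated in the abstract.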