Abstract
Many-objective optimization problems (MaOPs) are challenging tasks that involve optimizing many conflicting objectives simultaneously. In recent years, decomposition-based many-objective evolutionary algorithms have effectively maintained a balance between convergence and diversity. However, these algorithms struggle to accurately approximate the complex geometric structure of irregular Pareto fronts (PFs). This paper proposes an information entropy-driven evolutionary algorithm based on reinforcement learning (RL-RVEA) for many-objective optimization with irregular Pareto fronts. The algorithm leverages reinforcement learning to guide the evolutionary process: by interacting with the environment, it learns the shape and features of the PF and adaptively adjusts the distribution of reference vectors to cover the PF's structure effectively. Moreover, an information entropy-driven adaptive scalarization approach is designed to reflect the diversity of nondominated solutions, enabling the algorithm to balance multiple competing objectives adaptively and select solutions efficiently while maintaining individual diversity. To verify its effectiveness, RL-RVEA is compared with seven state-of-the-art algorithms on the DTLZ, MaF, and WFG test suites and on four real-world MaOPs. The experimental results demonstrate that the proposed algorithm provides a novel and practical method for addressing MaOPs with irregular PFs.
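The abstract does not detail how information entropy quantifies the diversity of nondominated solutions. As a hypothetical illustration only (the function name, the assignment-based formulation, and the data are assumptions, not the paper's actual method), one common way to use Shannon entropy as a diversity signal is to measure how evenly the nondominated solutions are distributed across the reference vectors:

```python
import math
from collections import Counter

def assignment_entropy(assignments):
    # Shannon entropy (in nats) of the distribution of nondominated
    # solutions over reference vectors. Each element of `assignments`
    # is the index of the reference vector a solution is associated
    # with. Higher entropy means a more even spread, i.e. better
    # diversity along the approximated Pareto front.
    counts = Counter(assignments)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Hypothetical assignment of 8 solutions to 4 reference vectors:
even = [0, 1, 2, 3, 0, 1, 2, 3]      # evenly spread over the vectors
skewed = [0, 0, 0, 0, 0, 1, 2, 3]    # clustered on vector 0

# The even spread yields strictly higher entropy than the clustered one.
print(assignment_entropy(even) > assignment_entropy(skewed))  # True
```

Such a scalar diversity signal could, in principle, be folded into a scalarization function or an RL reward; the actual coupling in RL-RVEA is described in the full paper, not in this abstract.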