Abstract
Multi-objective optimization problems can be solved by supplying a set of preferences to certain evolutionary multi-objective optimization algorithms (EMOAs). Each candidate solution is then ranked according to its contribution with respect to these preferences. R2-EMOA is an EMOA built on the decision maker's preferences, and these preferences help it balance convergence and diversity. However, how to construct the preferences and how to adaptively modify the decision maker's preferences have received little attention. In this paper, we propose an adaptive R2-EMOA, called CL-AR2-EMOA, which incorporates Coulomb's law to generate weight vectors together with an adaptive strategy based on features of the Pareto front. The weight-vector generation strategy is based on Coulomb's law, in which any two weight vectors are treated as like charges in the objective space. The adaptive strategy dynamically exchanges information between the weight vectors and the scales of the objective functions. The performance of CL-AR2-EMOA is evaluated on standard unconstrained benchmark problems, i.e., bi-objective and tri-objective WFG test instances and DTLZ1-DTLZ4 with 3, 5, 8, and 10 objectives. Our experimental results show that the proposed CL-AR2-EMOA performs competitively with respect to other R2-EMOAs.
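The abstract does not give the exact formulation, but the idea of treating weight vectors as like charges can be illustrated with a simple repulsion scheme. The following is a minimal Python sketch under that assumption: an inverse-square repulsive force pushes the vectors apart, and a clip-and-renormalize step keeps them on the unit simplex. The function name, parameters, and projection step are illustrative choices, not taken from the paper.

```python
import numpy as np

def coulomb_repulsion_weights(n_vectors, n_obj, steps=200, k=1.0, lr=0.01, seed=None):
    """Spread weight vectors on the unit simplex via Coulomb-like repulsion.

    Each pair of weight vectors is treated as a pair of like charges: the
    repulsive force magnitude is k / distance^2, pushing vectors apart so
    they cover the simplex more uniformly. Illustrative sketch only; the
    projection step and hyperparameters are assumptions, not the paper's.
    """
    rng = np.random.default_rng(seed)
    # Random initial weights on the unit simplex (Dirichlet sampling).
    W = rng.dirichlet(np.ones(n_obj), size=n_vectors)
    for _ in range(steps):
        forces = np.zeros_like(W)
        for i in range(n_vectors):
            diff = W[i] - W                      # vectors from every w_j to w_i
            dist = np.linalg.norm(diff, axis=1)  # pairwise distances
            dist[i] = np.inf                     # ignore self-interaction
            # Coulomb-style magnitude k / d^2, directed along (w_i - w_j).
            forces[i] = np.sum(k * diff / dist[:, None] ** 3, axis=0)
        W = W + lr * forces
        # Project back onto the simplex: clip negatives, renormalize rows.
        W = np.clip(W, 1e-6, None)
        W = W / W.sum(axis=1, keepdims=True)
    return W

if __name__ == "__main__":
    weights = coulomb_repulsion_weights(n_vectors=10, n_obj=3, seed=0)
    print(weights.round(3))
```

The repulsion drives initially clustered weight vectors toward a more uniform spread, which is the stated goal of the Coulomb-law-based generation strategy; the paper's actual force definition and simplex handling may differ.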