Abstract

The performance of evolutionary multi-objective optimization (EMO) algorithms is usually evaluated on artificial test problems such as DTLZ and WFG. Every year, new EMO algorithms with high performance on those test problems are proposed. An open question is whether they also work well on real-world problems. In this paper, we address this question by examining the performance of ten EMO algorithms, including both well-known representative algorithms and recently proposed ones. First, those algorithms are applied to five artificial test suites (DTLZ, WFG, Minus-DTLZ, Minus-WFG and MaF) and three real-world problem suites. The performance of each algorithm is evaluated by the hypervolume indicator. Next, a ranking of the ten EMO algorithms is created for each problem suite, yielding eight rankings in total (one per problem suite). Then, the eight rankings are visually compared to answer our research question. The distance between each pair of rankings is also calculated to support the visual comparison. Our experimental results show that similar rankings of the ten EMO algorithms are obtained for the three real-world problem suites and Minus-WFG. They also show that the ranking for each of the three real-world problem suites is clearly different from the ranking for DTLZ.
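To make the ranking and comparison steps concrete, the following minimal Python sketch builds a per-suite ranking from mean hypervolume values and computes a distance between two rankings. The specific distance (sum of absolute rank differences, i.e., the Spearman footrule) and the hypervolume numbers are illustrative assumptions only; they are not taken from the paper.

```python
import numpy as np

def rank_algorithms(mean_hv):
    """Rank algorithms from best (rank 1) to worst by mean hypervolume.

    mean_hv: dict mapping algorithm name -> mean hypervolume over one suite
             (larger hypervolume is better).
    Returns: dict mapping algorithm name -> rank.
    """
    ordered = sorted(mean_hv, key=mean_hv.get, reverse=True)
    return {alg: r + 1 for r, alg in enumerate(ordered)}

def ranking_distance(rank_a, rank_b):
    """Distance between two rankings over the same set of algorithms:
    the sum of absolute rank differences (Spearman footrule)."""
    return sum(abs(rank_a[alg] - rank_b[alg]) for alg in rank_a)

# Hypothetical mean hypervolume values for three of the ten algorithms
# on two suites (illustrative numbers only, not results from the paper).
hv_dtlz = {"NSGA-III": 0.71, "MOEA/D": 0.69, "NSGA-II": 0.62}
hv_real = {"NSGA-III": 0.58, "MOEA/D": 0.64, "NSGA-II": 0.66}

rank_dtlz = rank_algorithms(hv_dtlz)
rank_real = rank_algorithms(hv_real)
print(rank_dtlz, rank_real, ranking_distance(rank_dtlz, rank_real))
```

A small distance indicates that two problem suites order the algorithms similarly, which is the kind of agreement the paper reports between the real-world suites and Minus-WFG.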
