Abstract

In general, the results of performance comparisons among optimization algorithms depend on the parameter specifications used for each algorithm. For a fair comparison, it may be necessary to use the best specifications for each algorithm rather than the same specifications for all algorithms, since each algorithm has its own best specifications. However, in the evolutionary multi-objective optimization (EMO) field, performance comparisons have usually been performed under the same parameter specifications for all algorithms. In particular, the same population size is almost always used. In this paper, we examine this practice from the viewpoint of fair comparison of EMO algorithms. First, we demonstrate that performance comparison results depend on the population size. Next, we explain a new trend in performance comparison where each algorithm is evaluated by selecting a pre-specified number of solutions from all examined solutions (i.e., by selecting a solution subset of a pre-specified size). Then, we discuss how the size of the selected subset should be specified. Through computational experiments, we show that performance comparison results do not strongly depend on the selected subset size, whereas they do depend on the population size.
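To illustrate the subset-selection idea mentioned above, the following sketch greedily selects a pre-specified number of solutions from a set of examined solutions. Hypervolume-based greedy selection is used here purely as an illustrative criterion (an assumption for this example; the paper may use a different selection method), for a two-objective minimization problem.

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a set of 2-objective minimization solutions
    relative to a reference point ref = (r1, r2)."""
    # Keep only points that strictly dominate the reference point,
    # sorted by the first objective.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip points dominated by earlier ones
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def greedy_subset_selection(points, k, ref):
    """Greedily pick a subset of size k that maximizes hypervolume
    (one common subset-selection criterion, assumed here)."""
    selected, remaining = [], list(points)
    for _ in range(min(k, len(remaining))):
        best = max(remaining,
                   key=lambda p: hypervolume_2d(selected + [p], ref))
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, with the candidate solutions `[(1, 3), (2, 2), (3, 1), (0.5, 3.5)]` and reference point `(4, 4)`, selecting a subset of size 2 first picks `(2, 2)`, the point with the largest individual hypervolume. The greedy approach is a common heuristic here because exact hypervolume subset selection is computationally expensive for larger subset sizes.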
