Abstract

Assessing the empirical performance of Multi-Objective Evolutionary Algorithms (MOEAs) is vital when we extensively test a set of MOEAs and aim to determine a proper ranking thereof. Multiple performance indicators, e.g., the generational distance and the hypervolume, are frequently applied when reporting the experimental data, and typically the data on each indicator is analyzed independently of the other indicators. Such a treatment brings conceptual difficulties in aggregating the results over all performance indicators, and it might fail to discover significant differences among algorithms if the marginal distributions of the performance indicators overlap. Therefore, in this paper, we propose to conduct a multivariate \(\mathcal{E}\)-test on the joint empirical distribution of the performance indicators to detect potential differences in the data, followed by a post-hoc procedure that uses linear discriminant analysis to determine the superiority between algorithms. The effectiveness of this performance analysis is demonstrated by experiments conducted on four algorithms, 16 problems, and six different numbers of objectives.

Keywords: Many-objective optimization · Benchmarking · Performance analysis · Performance indicators · Hypothesis testing
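For illustration, and assuming the multivariate \(\mathcal{E}\)-test refers to the energy-distance two-sample test of Székely and Rizzo, the sketch below shows how such a test could be applied to joint indicator data, where each row is one independent run of an algorithm and each column is one indicator (e.g., generational distance and hypervolume). The function names and the permutation scheme are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def energy_statistic(x, y):
    """Székely-Rizzo E-statistic between two multivariate samples."""
    n, m = len(x), len(y)
    a = cdist(x, y).mean()  # mean cross-sample distance
    b = cdist(x, x).mean()  # mean within-sample distance for x
    c = cdist(y, y).mean()  # mean within-sample distance for y
    return n * m / (n + m) * (2 * a - b - c)

def energy_test(x, y, n_perm=999, seed=0):
    """Permutation p-value for the two-sample E-test (illustrative)."""
    rng = np.random.default_rng(seed)
    observed = energy_statistic(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        if energy_statistic(pooled[perm[:n]], pooled[perm[n:]]) >= observed:
            count += 1
    return observed, (count + 1) / (n_perm + 1)

# Hypothetical data: 30 runs of two algorithms, 2 indicators per run
runs_a = np.random.default_rng(1).normal(size=(30, 2))
runs_b = np.random.default_rng(2).normal(loc=0.5, size=(30, 2))
stat, p = energy_test(runs_a, runs_b)
print(f"E-statistic = {stat:.3f}, permutation p-value = {p:.3f}")
```

In this setting, a small permutation p-value indicates that the joint indicator distributions of the two algorithms differ, which is what would trigger the post-hoc discriminant-analysis step described above.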
