Abstract

Many metaheuristic approaches are inherently stochastic. To compare such methods, statistical tests are needed. However, choosing an appropriate test is not trivial, since each test makes assumptions about the distribution of the underlying data that must hold before it can be applied. Permutation tests (P-Tests) are statistical tests with a minimal number of assumptions. These tests are simple, intuitive, and nonparametric. In this paper, we urge researchers in the field of metaheuristics to adopt P-Tests to compare their algorithms. We define two test statistics and then present an algorithm that uses them to compute the p-value. The proposed procedure is used to compare 5 metaheuristic algorithms on 10 benchmark functions. The resulting p-values are compared with those of two widely used statistical tests. The results show that the proposed P-Test is generally consistent with the classical tests, but more conservative in a few cases.
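The abstract does not specify the two test statistics or the exact p-value algorithm, so the following is only a minimal sketch of a generic two-sample permutation test, assuming an absolute difference of means as a hypothetical stand-in statistic and per-run objective values as the input samples.

```python
# Illustrative sketch of a two-sample permutation test (P-Test).
# The paper's two specific test statistics are not given in the abstract;
# the absolute difference of means used here is a common, hypothetical stand-in.
import numpy as np

def permutation_test(results_a, results_b, n_permutations=10_000, seed=0):
    """Two-sided p-value for the hypothesis that the two samples
    (e.g. per-run objective values of two metaheuristics on one
    benchmark function) come from the same distribution."""
    rng = np.random.default_rng(seed)
    a = np.asarray(results_a, dtype=float)
    b = np.asarray(results_b, dtype=float)
    observed = abs(a.mean() - b.mean())      # statistic on the real labels
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)                  # randomly reassign runs to algorithms
        perm_stat = abs(pooled[:a.size].mean() - pooled[a.size:].mean())
        if perm_stat >= observed:
            count += 1
    # add-one correction keeps the estimated p-value away from an exact zero
    return (count + 1) / (n_permutations + 1)

# Example: 30 independent runs of two stochastic optimizers on one benchmark
algo1 = np.random.default_rng(1).normal(10.0, 2.0, 30)
algo2 = np.random.default_rng(2).normal(11.5, 2.0, 30)
print(permutation_test(algo1, algo2))
```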
