Abstract

Benchmark testing provides a way to measure the performance of an evolutionary algorithm before it is applied to real problems. In this paper, a systematic method for constructing a benchmark test suite is proposed. A set of established algorithms is employed, and for each algorithm a uniquely easy problem instance is generated by evolution. The resulting instances constitute a novel benchmark test suite in which each problem instance is favorable (uniquely easy) to one algorithm only. A hierarchical fitness assignment method, based on statistical test results, is designed to generate uniquely easy (or hard) problem instances for an algorithm. Experimental results show that each algorithm robustly performs best on its uniquely favorable problem, and the testing results are repeatable. The distribution of algorithm performance across the suite is unbiased (uniform), which mimics a subset of real-world problems that is uniformly distributed. The resulting suite offers 1) an alternative benchmark suite for evolutionary algorithms; 2) a novel method of assessing new algorithms; and 3) meaningful training and testing problems for evolutionary algorithm selectors and portfolios.
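The following is a minimal, hypothetical sketch (not the authors' code) of the idea described above: a problem instance is evolved so that one target algorithm becomes uniquely favored, with a hierarchical fitness assignment that first counts how many competitors the target beats with statistical significance (here a rank-sum test at a 0.05 level, chosen only for illustration) and then breaks ties by the mean performance gap. The instance representation, the `run_algorithm` placeholder, and the algorithm dictionaries are all assumptions made for this sketch.

```python
# Hedged sketch: evolving a problem instance that is "uniquely easy" for one
# target optimiser. All algorithm/instance representations are placeholders.
import numpy as np
from scipy.stats import mannwhitneyu

RUNS = 15   # independent runs per algorithm on a candidate instance
DIM = 10    # dimensionality of the evolved instance parameters

def run_algorithm(algo, instance, rng):
    """Placeholder: return the best objective value `algo` reaches on `instance`.
    In a real study this would execute e.g. a DE, PSO or CMA-ES run."""
    shift = instance @ algo["bias"]                 # hypothetical interaction
    return shift + rng.normal(scale=algo["noise"])

def instance_fitness(instance, target, competitors, rng):
    """Hierarchical fitness of a candidate instance:
    (competitors beaten with significance, mean gap to the closest competitor).
    Larger is better; minimisation of objective values is assumed."""
    t_results = [run_algorithm(target, instance, rng) for _ in range(RUNS)]
    beaten, worst_gap = 0, np.inf
    for comp in competitors:
        c_results = [run_algorithm(comp, instance, rng) for _ in range(RUNS)]
        # Rank-sum test: is the target's result distribution stochastically smaller?
        _, p = mannwhitneyu(t_results, c_results, alternative="less")
        if p < 0.05:
            beaten += 1
        worst_gap = min(worst_gap, np.mean(c_results) - np.mean(t_results))
    return (beaten, worst_gap)

def evolve_unique_instance(target, competitors, generations=50, seed=0):
    """(1+1)-style evolution of an instance uniquely easy for `target`."""
    rng = np.random.default_rng(seed)
    parent = rng.normal(size=DIM)
    parent_fit = instance_fitness(parent, target, competitors, rng)
    for _ in range(generations):
        child = parent + rng.normal(scale=0.1, size=DIM)   # Gaussian mutation
        child_fit = instance_fitness(child, target, competitors, rng)
        if child_fit >= parent_fit:    # lexicographic (hierarchical) comparison
            parent, parent_fit = child, child_fit
    return parent, parent_fit

if __name__ == "__main__":
    # Hypothetical demo with three placeholder "algorithms".
    algos = [{"bias": np.random.default_rng(i).normal(size=DIM), "noise": 0.1}
             for i in range(3)]
    inst, fit = evolve_unique_instance(algos[0], algos[1:])
    print("competitors beaten with significance:", fit[0])
```

Comparing the fitness tuples lexicographically realizes the hierarchy: statistical superiority over more competitors always dominates, and the raw performance gap only decides among instances that are tied on the first criterion.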
