Abstract

This paper proposes a new methodology for determining the relative performance of optimization algorithms across various classes of instances. Rather than reporting performance on a chosen test set of benchmark instances, we aim to develop metrics for an algorithm's performance generalized across a diverse set of instances. Instances are summarized by a set of features that correlate with difficulty, and we propose methods for visualizing instances and algorithm performance in this high-dimensional feature space. The footprint of an algorithm is the region of instance space where good performance can be expected, and we propose new metrics to measure the relative size of an algorithm's footprint in instance space. The methodology is demonstrated using the Traveling Salesman Problem as a case study.
