Abstract
Due to the lack of systematic empirical analyses and comparisons of ideas and methods, a clearly established state of the art is still missing in the optimization-based design of robot swarms. In this article, we propose an experimental protocol for the comparison of fully automatic design methods. This protocol is characterized by two notable elements: 1) a way to define benchmarks for the evaluation and comparison of design methods and 2) a sampling strategy that minimizes the variance when estimating their expected performance. To define generally applicable benchmarks, we introduce the notion of a mission generator: a tool to generate missions that mimic those a design method will eventually have to solve. To minimize the variance of the performance estimation, we show that, under some common assumptions, one should adopt the sampling strategy that maximizes the number of missions considered; a formal proof is provided as supplementary material. We illustrate the experimental protocol by comparing the performance of two offline fully automatic design methods that were presented in previous publications.
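To make the variance claim concrete, the following is a minimal sketch under assumed notation; the symbols M, n, B, \sigma_b^2, and \sigma_w^2 are illustrative and not taken from the article, whose actual assumptions and proof are given in its supplementary material. Suppose a fixed budget of B = Mn simulation runs is split evenly over M missions, with n runs per mission, and suppose per-mission expected performance varies across missions with between-mission variance \sigma_b^2 while individual runs within a mission vary with within-mission variance \sigma_w^2. The grand-mean estimator \hat{\theta} of the expected performance then has variance

\[
\operatorname{Var}\bigl(\hat{\theta}\bigr)
= \frac{1}{M}\left(\sigma_b^2 + \frac{\sigma_w^2}{n}\right)
= \frac{\sigma_b^2}{M} + \frac{\sigma_w^2}{B}.
\]

The second term is fixed by the budget B, so under these assumptions the variance is minimized by making M as large as the budget allows (n = 1), that is, by maximizing the number of missions considered.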