Abstract
Algorithm selection—the mapping of problem instances to algorithms—has been successfully applied to a variety of complex theoretical and practical problems, including computer games. In this paper, we extend the traditional framework, which considers a single decision maker, to adversarial settings by modeling algorithm selection as a normal-form game. In this "game of algorithm selection," agents select algorithms to play a computer game on their behalf. The game's payoff matrix stores the relative performance among algorithms. We also consider nonstationary scenarios, where algorithms can learn from previous matches. We apply this approach to the real-time strategy game StarCraft, using bots developed for the game as our algorithms. Our experiments suggest that minimax-Q is a suitable method for algorithm selection in both stationary and nonstationary conditions. We proceed by implementing our approach in MegaBot, a fully capable StarCraft bot. MegaBot showed robustness to nonstationarity, competing in the difficult setting of StarCraft AI tournaments. MegaBot successfully learns how to select algorithms, exhibiting increasing win rates as a tournament progresses. In 2016, its debut year, it finished ahead of 60% of its opponents in two of three tournaments. In 2017, however, MegaBot faced difficulties as its algorithm portfolio became outdated compared to newer entries.
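To make the "game of algorithm selection" concrete, the sketch below approximates the minimax-optimal (maximin) mixed strategy over a portfolio of bots. The 3x3 cyclic payoff matrix is purely illustrative (bot A beats B, B beats C, C beats A), not data from the paper, and fictitious play is used here as a simple stand-in solver; the paper's minimax-Q method instead solves a linear program at each update.

```python
# Toy sketch: approximating the maximin mixed strategy of a zero-sum
# "game of algorithm selection" via fictitious play.
# The payoff matrix is illustrative, not results from the paper.

def fictitious_play(payoff, iterations=30000):
    """Return the row player's empirical mixture and estimated game value."""
    n, m = len(payoff), len(payoff[0])
    row_counts = [0] * n  # how often the row player picked each algorithm
    col_counts = [0] * m
    row_counts[0] = col_counts[0] = 1  # arbitrary initial plays
    for _ in range(iterations):
        # Row player (maximizer) best-responds to the column empirical mix.
        row_payoffs = [sum(payoff[i][j] * col_counts[j] for j in range(m))
                       for i in range(n)]
        best_row = max(range(n), key=lambda i: row_payoffs[i])
        # Column player (minimizer) best-responds to the row empirical mix.
        col_payoffs = [sum(payoff[i][j] * row_counts[i] for i in range(n))
                       for j in range(m)]
        best_col = min(range(m), key=lambda j: col_payoffs[j])
        row_counts[best_row] += 1
        col_counts[best_col] += 1
    row_total, col_total = sum(row_counts), sum(col_counts)
    strategy = [c / row_total for c in row_counts]
    value = sum(strategy[i] * payoff[i][j] * col_counts[j] / col_total
                for i in range(n) for j in range(m))
    return strategy, value

# Cyclic matchups among three hypothetical bots: A beats B, B beats C, C beats A.
payoff = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]
strategy, value = fictitious_play(payoff)
```

For this cyclic game the maximin strategy mixes uniformly (one third on each bot) and the game value is zero, which is what the empirical mixture approaches; against an intransitive portfolio, no single algorithm dominates, which is why a mixed selection policy is needed.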