Abstract

Strategic decision-making in adversarial environments is an open problem that poses major challenges across many domains. Traditional adversarial game-playing programs, which work by searching the space of game states, have been actively explored in the literature for this purpose. However, most approaches proposed for playing adversarial games have been tested only in static game environments with fixed winning goals. In this study, we extend the static environment of Hero Academy, a turn-based, multi-action, adversarial game, into a more dynamic game-playing environment and examine the behavior and performance of tree search and evolutionary algorithms when playing it. Our simulations show that, while evolutionary algorithms continue to dominate tree search algorithms, tree search algorithms become relatively more competitive under certain dynamic scenarios. Equally important, evolutionary algorithms are able to alter their approach to playing and winning a game in the face of dynamic changes in the environment, whereas tree search algorithms are not. The findings of this study should further advance our understanding of strategic decision-making in adversarial situations, particularly when goals and targets change dynamically.
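
To make the comparison concrete, the sketch below illustrates, under stated assumptions, how an evolutionary planner of the kind examined in the study might select a multi-action turn: a population of candidate action sequences is repeatedly mutated and re-scored with a forward model, and the best sequence found is played. The names, constants, and toy forward model here are hypothetical placeholders for illustration only; they are not the paper's implementation or Hero Academy's actual rules.

```python
"""Minimal sketch of an evolutionary planner for one multi-action turn.

Assumptions (not from the paper): a toy forward model where a "state" is a
single score, placeholder action ids, and a simple mutation-only evolutionary
loop in the spirit of online/rolling-horizon evolution.
"""
import random

ACTIONS_PER_TURN = 5    # multi-action games allow several actions per turn
POPULATION_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.3


def legal_actions(state):
    # Placeholder: ten abstract action ids, independent of the state.
    return list(range(10))


def apply_action(state, action):
    # Hypothetical forward model: each action shifts the score by a noisy amount.
    return state + (action - 4.5) * random.uniform(0.5, 1.5)


def heuristic(state):
    # Evaluate how favorable the resulting state looks for the acting player.
    return state


def evaluate(start_state, plan):
    # Roll the plan forward through the model and score the end state.
    state = start_state
    for action in plan:
        state = apply_action(state, action)
    return heuristic(state)


def random_plan(state):
    return [random.choice(legal_actions(state)) for _ in range(ACTIONS_PER_TURN)]


def mutate(plan, state):
    # Replace each action in the sequence with a random legal one at some rate.
    child = plan[:]
    for i in range(len(child)):
        if random.random() < MUTATION_RATE:
            child[i] = random.choice(legal_actions(state))
    return child


def evolve_turn(start_state):
    population = [random_plan(start_state) for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population,
                        key=lambda p: evaluate(start_state, p),
                        reverse=True)
        elite = scored[: POPULATION_SIZE // 2]   # keep the better half
        offspring = [mutate(random.choice(elite), start_state) for _ in elite]
        population = elite + offspring
    return max(population, key=lambda p: evaluate(start_state, p))


if __name__ == "__main__":
    best = evolve_turn(start_state=0.0)
    print("Chosen action sequence for this turn:", best)
```

Because the planner re-evaluates whole action sequences against whatever heuristic the current environment rewards, it can shift its play when goals change dynamically, which is the property contrasted with tree search in the abstract above.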
