Abstract

Real-Time Strategy (RTS) games are high-performance simulators of real-life warfare. Opponent modeling in RTS games is a challenging problem. Many approaches that addressed opponent modeling in RTS games proved inefficient or were game-specific. In this paper, we present a machine learning based system for opponent modeling in RTS games that largely automates the opponent modeling process, from learning to employment. The proposed system does not depend on prior domain knowledge, and it continuously adapts according to its performance. For efficiency, the system consists of two phases: an offline phase for learning the opponent models, and an online phase for employing the learned models efficiently during gameplay. The system was implemented and evaluated in the RTS game ‘Glest’. Experiments were conducted on two different game maps, namely ‘Angry Forest’ and ‘Dark Waters’. The obtained results revealed that the system is able to semi-automate the learning of opponent models and can create an adaptable game AI. Moreover, the results indicated that incorporating the proposed opponent modeling system indeed increases the performance of the game AI.
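The two-phase design mentioned above (offline learning of opponent models, then online use of those models during gameplay) can be illustrated with a minimal sketch. The feature set, strategy labels, and choice of classifier below are assumptions made for illustration only and are not taken from the paper.

```python
# Hypothetical sketch of an offline/online opponent modeling split.
# The strategy labels, features, and classifier are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

STRATEGIES = ["rush", "boom", "turtle"]  # assumed opponent strategy labels


# --- Offline phase: learn an opponent model from logged game traces ---
def train_opponent_model(features: np.ndarray, labels: np.ndarray) -> RandomForestClassifier:
    """Fit a classifier mapping observed game-state features to strategy labels."""
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(features, labels)
    return model


# --- Online phase: query the learned model during gameplay ---
def predict_opponent_strategy(model: RandomForestClassifier, observation: np.ndarray) -> str:
    """Classify the opponent's likely strategy from the current observation."""
    return STRATEGIES[int(model.predict(observation.reshape(1, -1))[0])]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in for features extracted from recorded games,
    # e.g. [enemy army size, worker count, expansion count, elapsed time].
    X = rng.random((300, 4))
    y = rng.integers(0, len(STRATEGIES), size=300)
    model = train_opponent_model(X, y)
    print(predict_opponent_strategy(model, rng.random(4)))
```

The split keeps the expensive model fitting out of the game loop: only the cheap prediction step runs online, which matches the abstract's stated motivation of efficiency during gameplay.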
