Abstract

Recognizing the destination of a maneuvering agent is important in real-time strategy games. Because finding a path in an uncertain environment is essentially a sequential decision problem, the maneuvering process can be modeled by a Markov decision process (MDP). However, the MDP does not define an action duration. In this paper, we propose a novel semi-Markov decision model (SMDM). In the SMDM, the destination is regarded as a hidden state that affects the selection of an action; the action is associated with a duration variable that indicates whether the action is completed. We also exploit a Rao-Blackwellised particle filter (RBPF) for inference under the dynamic Bayesian network structure of the SMDM. In experiments, we simulate agents maneuvering in a combat field and use the agents' traces to evaluate the performance of our method. The results show that the SMDM outperforms another extension of the MDP in terms of precision, recall, and F-measure. Destinations are recognized efficiently by our method whether or not they change. Additionally, the RBPF infers destinations with smaller variance and in less time than the SPF, and its average failure rate is lower when the number of particles is insufficient.
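The model structure described above (a hidden destination that conditions action selection, and an action that only terminates when its duration expires) can be illustrated with a short sketch. The following Python snippet is a minimal illustration under our own simplifying assumptions; the function, variable names, and distributions are hypothetical and are not taken from the paper.

```python
import random

# Illustrative sketch of one generative step of an SMDM-style model:
# the hidden destination conditions action selection, and each action
# carries a duration counter whose expiry plays the role of the
# "completed" flag. All names and distributions below are assumptions
# for illustration, not the paper's definitions.

def smdm_step(state, action, remaining, destination, policy, transition):
    """Advance the agent by one time step.

    policy[destination][state] -> list of (action, duration) candidates
    transition[state][action]  -> list of (next_state, prob) pairs
    """
    if remaining == 0:  # previous action finished, so select a new one
        action, remaining = random.choice(policy[destination][state])
    next_states, probs = zip(*transition[state][action])
    state = random.choices(next_states, weights=probs, k=1)[0]
    remaining -= 1      # count down toward action completion
    return state, action, remaining
```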

Highlights

  • In recent decades, many commercial real-time strategy (RTS) games, such as StarCraft and WarCraft, have become more and more popular

  • To demonstrate the effectiveness of the semi-Markov decision model (SMDM), we compute three statistical metrics of the recognition results, namely precision, recall, and F-measure, for both the SMDM and the abstract hidden Markov model with a changeable top-level policy (AHMM-CTP)

  • To keep the inference process of the AHMM-CTP running, we forcibly reset all particle weights to 1/Np when they are all zero, where Np is the number of particles (a minimal sketch of this reset follows this list)

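The weight-reset step mentioned in the last highlight can be sketched as follows. This is a minimal illustration assuming a NumPy array of particle weights; the function name and the normalization detail in the non-degenerate branch are our own assumptions.

```python
import numpy as np

def reset_degenerate_weights(weights):
    """Reset all particle weights to 1/Np when they are all zero so that
    the AHMM-CTP inference can continue (Np is the number of particles)."""
    weights = np.array(weights, dtype=float)   # copy so the caller's array is untouched
    if not np.any(weights):                    # degenerate case: every weight is zero
        weights[:] = 1.0 / len(weights)
    else:
        weights /= weights.sum()               # otherwise normalize as usual
    return weights
```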

Introduction

Many commercial real-time strategy (RTS) games, such as StarCraft and WarCraft, have become more and more popular. A key problem in developing these games is to create AI players that can recognize the intentions of their opponents. A typical and significant intention in RTS games is the destination of a maneuvering player. If AI players can recognize the destination from observed traces of their opponents, they can prepare a defense. Because of these benefits, recognition methods have been applied in several digital games. Like intention recognition in general, recognizing the destination of a maneuvering agent usually consists of three steps: formalization, parameter estimation, and destination inference [2]. Note that these parameters can be estimated by machine learning algorithms or by counting [3].
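As a concrete example of the counting approach to parameter estimation mentioned above, transition probabilities can be estimated from frequency counts over observed traces. The sketch below is our own illustration; the function name, trace format, and Laplace smoothing are assumptions, not the paper's procedure.

```python
from collections import Counter, defaultdict

def estimate_transition_probs(traces, alpha=1.0):
    """Estimate P(next_state | state, action) by counting transitions in
    observed traces, with additive (Laplace) smoothing alpha.
    Each trace is a list of (state, action, next_state) triples."""
    counts = defaultdict(Counter)
    for trace in traces:
        for state, action, next_state in trace:
            counts[(state, action)][next_state] += 1
    probs = {}
    for key, ctr in counts.items():
        total = sum(ctr.values()) + alpha * len(ctr)
        probs[key] = {s: (c + alpha) / total for s, c in ctr.items()}
    return probs
```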
