Abstract
A method is presented for training an Input-Output Hidden Markov Model (IOHMM) to identify a player's current goal in an action-adventure game. The goals, which served as the hidden states of the IOHMM, were Explore, Fight, and Return to Town. The observation model was trained by directing the player to achieve particular goals and counting the actions taken. When trained on first-time players, player-specific models did not appear to provide any benefit over a model trained on the experimenter. However, models trained on these players' subsequent trials were significantly better than the models trained on the same players' first trials, and also outperformed the model trained on the experimenter. This suggests that game goal recognition systems are best trained after players have had some time to develop a style of play. Systems for probabilistic reasoning over time could help game designers make games more responsive to players' individual styles and approaches.
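To make the inference step concrete, the sketch below filters a belief over the three goal states as actions are observed. It is a simplification under stated assumptions: a plain HMM forward algorithm stands in for the IOHMM (the input-conditioning on game state is omitted), and all probabilities, action names, and the `filter_goals` helper are illustrative inventions, not the paper's trained values.

```python
# Hidden goal states from the abstract; every number below is an
# illustrative assumption, not a trained value, and a plain HMM is
# used in place of the IOHMM (input-conditioning omitted).
GOALS = ["Explore", "Fight", "ReturnToTown"]

# P(goal_t | goal_{t-1}) -- goals tend to persist between actions.
TRANS = {
    "Explore":      {"Explore": 0.90, "Fight": 0.05, "ReturnToTown": 0.05},
    "Fight":        {"Explore": 0.10, "Fight": 0.85, "ReturnToTown": 0.05},
    "ReturnToTown": {"Explore": 0.05, "Fight": 0.05, "ReturnToTown": 0.90},
}

# P(action | goal), as might be estimated by counting actions while a
# player pursues a directed goal (the training procedure described above).
EMIT = {
    "Explore":      {"move": 0.70, "attack": 0.10, "use_portal": 0.20},
    "Fight":        {"move": 0.20, "attack": 0.75, "use_portal": 0.05},
    "ReturnToTown": {"move": 0.60, "attack": 0.05, "use_portal": 0.35},
}

def filter_goals(actions):
    """Forward-algorithm filtering: return P(goal_t | actions_1..t)."""
    belief = {g: 1.0 / len(GOALS) for g in GOALS}  # uniform prior
    for a in actions:
        # Predict step: propagate belief through the transition model.
        predicted = {g: sum(belief[h] * TRANS[h][g] for h in GOALS)
                     for g in GOALS}
        # Update step: weight by the likelihood of the observed action.
        belief = {g: predicted[g] * EMIT[g][a] for g in GOALS}
        z = sum(belief.values())
        belief = {g: p / z for g, p in belief.items()}
    return belief

belief = filter_goals(["attack", "attack", "move"])
print(max(belief, key=belief.get))  # → Fight
```

With this structure, a game could re-run the update step after every player action and adapt content once one goal's posterior dominates; the IOHMM in the paper additionally conditions these probabilities on observable game inputs.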
Published in: Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment