Abstract

In many video games, such as role-playing games (RPGs) and sports games, computer players act not only as opponents of the human player but also as team-mates. However, computer team-mates often behave in ways that human players do not expect, and such mismatches cause greater dissatisfaction than mismatches with computer opponents. One reason for these mismatches is that these games involve several types of sub-goals or play-styles, and AI players act without understanding the human player's preferences among them. The purpose of this study is to propose a method for developing computer team-mate players that estimate the sub-goal preferences of their human team-mate and act according to those preferences. To this end, we modeled the sub-goal preferences as a function and determined the most likely parameters with a multi-strategy Monte-Carlo method, referring to the actions the human team-mate selected in the past. We then evaluated the proposed method through two series of experiments, one using artificial players with various sub-goal preferences and the other using human players. The experiments showed that the proposed method can estimate a player's preferences within a few games and can reduce the dissatisfaction of human players.
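The abstract does not specify the concrete preference model or estimation procedure. Purely as an illustrative sketch, assuming the preference function is a weighting over sub-goals and that actions are chosen roughly in proportion to their preference-weighted value, a Monte-Carlo search for the most likely weights given a player's past choices might look like the following (all function names, parameters, and data are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical sketch: estimate a team-mate's sub-goal preference weights
# from observed action choices via Monte-Carlo sampling over candidate
# weight vectors. Illustrative only; not the paper's actual method.

def action_probabilities(weights, subgoal_values):
    """Softmax over actions, where each action's score is the
    preference-weighted sum of its sub-goal values."""
    scores = subgoal_values @ weights            # shape: (num_actions,)
    exp_scores = np.exp(scores - scores.max())   # subtract max for stability
    return exp_scores / exp_scores.sum()

def estimate_preferences(observed, num_samples=5000, num_subgoals=3, rng=None):
    """Search for the preference weights that make the observed
    (subgoal_values, chosen_action) pairs most likely."""
    rng = rng or np.random.default_rng(0)
    best_weights, best_log_lik = None, -np.inf
    for _ in range(num_samples):
        # Draw a candidate preference vector on the probability simplex.
        weights = rng.dirichlet(np.ones(num_subgoals))
        log_lik = 0.0
        for subgoal_values, chosen_action in observed:
            probs = action_probabilities(weights, subgoal_values)
            log_lik += np.log(probs[chosen_action] + 1e-12)
        if log_lik > best_log_lik:
            best_weights, best_log_lik = weights, log_lik
    return best_weights

# Example: two observed decisions, each offering three candidate actions
# scored on three sub-goals (e.g. "score", "assist", "defend").
history = [
    (np.array([[1.0, 0.2, 0.1],
               [0.3, 0.9, 0.2],
               [0.1, 0.1, 0.8]]), 1),   # player chose the "assist"-heavy action
    (np.array([[0.8, 0.4, 0.0],
               [0.2, 1.0, 0.3],
               [0.0, 0.2, 0.9]]), 1),
]
print(estimate_preferences(history))
```

A computer team-mate could then rank its own candidate actions by the estimated weights, which is one simple way to act "according to" the inferred preferences.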
