Abstract

The authors evaluate the extent to which a user's impression of an AI agent can be improved by giving the agent the abilities of self-estimation, variable thinking time, and coordination of risk-taking tendency. They modified the algorithm of an AI agent for the cooperative game Hanabi to have all of these traits and investigated how the user's impression changed through play with the agent. A self-estimation task was used to evaluate the effect that an agent's ability to read the user's intention had on the impression it made. The authors also show that an agent's thinking time influences the impression it makes, and they investigated the relationship among the concordance of the risk-taking tendencies of players and agents, the player's impression of the agents, and the game experience. The self-estimation experiment showed that the more accurately the agent estimated its own information, the more likely the partner was to perceive humanity, affinity, intelligence, and communication skill in the agent. The authors also found that an agent that varies the length of its thinking time according to the priority of its action gives the impression of being smarter than an agent with constant thinking time, provided the player notices the difference in thinking time, or than an agent that varies its thinking time randomly. The experiment on concordance of risk-taking tendency showed that this concordance influences the player's impression of the agent. These results suggest that game agent designers can improve the player's disposition toward an agent, and the game experience, by adjusting the agent's self-estimation level, thinking time, and risk-taking tendency to the player's personality and inner state during the game.

Highlights

  • An AI agent cooperating with a human player in a cooperative game cannot create a good gaming experience unless the human player sees the agent as a worthy partner

  • When the agent’s estimation of its own information succeeded with high probability, the sense of intimacy, intelligence, and communication skill that the partner felt toward the agent was confirmed to be as high as that felt toward a human teammate

  • Agents that vary the length of their thinking time according to the priority of their actions were found to give players the impression of being distressed, compared with agents whose thinking time is constant
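The priority-dependent thinking time described above can be illustrated with a minimal sketch. The mapping below is hypothetical (the paper does not publish its formula): it assumes a discrete priority scale where obvious, high-priority moves get a short pause and uncertain, low-priority moves get a longer one, so the pause length itself signals how torn the agent is.

```python
def thinking_time(priority: int, base: float = 1.0,
                  spread: float = 2.0, max_priority: int = 5) -> float:
    """Hypothetical mapping from action priority to a deliberation delay.

    priority     -- 1 (uncertain move) .. max_priority (obvious move)
    base         -- minimum pause in seconds for the clearest action
    spread       -- extra seconds added as priority drops to 1
    """
    if not 1 <= priority <= max_priority:
        raise ValueError("priority out of range")
    # Linear scaling: priority 5 -> base seconds,
    # priority 1 -> base + spread seconds.
    return base + spread * (max_priority - priority) / (max_priority - 1)
```

An agent would then sleep for `thinking_time(p)` seconds before emitting its chosen action; the constant-time and random-time control conditions from the experiment correspond to replacing this function with a fixed value or a random draw.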


Summary

Introduction

An AI agent cooperating with a human player in a cooperative game cannot create a good gaming experience unless the human player sees the agent as a worthy partner. An AI agent capable of eliciting such positive player reactions is considered to contribute to cooperation in the sense of making a good impression on users, in addition to directly contributing to the score. One of the best-known studies of theoretical solutions to Hanabi is the work of Cox et al. (2015), who framed Hanabi as a hat-guessing problem (Butler et al., 2009) and achieved an average score of 24.7 in the five-player game. A game in which different agents cooperate with each other is well suited to issues related to agents’ “theory of mind” and cooperation, such as intention recognition (Walton-Rivers et al., 2017; Rabinowitz et al., 2018).


