Abstract

Players are often assumed to have static play-style preferences, but in practice those preferences can shift. In most games this is not an issue, but in games where an experience manager (ExpM) controls the experience, a shift in a player's preferences can lead to loss of engagement and churn. When an ExpM changes the game world, the world becomes biased in favor of the current player model; that bias in turn shapes how the ExpM observes the player's actions, potentially leading to a biased and incorrect player model. In these situations, the ExpM benefits from recalculating the player model efficiently. In this paper we show that techniques for solving multi-armed bandits, combined with our notion of distractions, can minimize the time needed to identify a player's preferences after they change, compensate for the bias of the game world, and minimize the number of intrusive elements added to the game world. To evaluate these claims, we use a text-only interactive fiction environment built specifically to be experience managed and to exhibit bias. Our experiments show that multi-armed bandit algorithms recalculate a player model in response to shifts in a player's preferences more quickly than several baseline methods.
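The abstract does not specify which bandit algorithm the ExpM uses, so the following is only a minimal sketch, assuming a standard UCB1 bandit in which each candidate play style is an arm and the reward is how well the player's observed behavior matches that style. The style list, the match_score() reward model, and the toy player whose preference shifts mid-session are all illustrative assumptions, not the paper's environment or method.

import math

PLAY_STYLES = ["combat", "exploration", "puzzle", "story"]

def match_score(style, observed_action):
    # Hypothetical reward: 1.0 if the observed action is typical of the
    # probed style, 0.0 otherwise. A real ExpM would derive this signal
    # from in-game events rather than a direct string comparison.
    return 1.0 if observed_action == style else 0.0

def ucb1_player_model(observe_action, horizon=200, c=math.sqrt(2)):
    counts = {s: 0 for s in PLAY_STYLES}    # times each style was probed
    totals = {s: 0.0 for s in PLAY_STYLES}  # summed reward per style

    for t in range(1, horizon + 1):
        untried = [s for s in PLAY_STYLES if counts[s] == 0]
        if untried:
            # Probe every arm once before applying the UCB formula.
            style = untried[0]
        else:
            # UCB1: empirical mean plus an exploration bonus that shrinks
            # as an arm is sampled more, so a style whose reward collapses
            # after a preference shift is abandoned relatively quickly.
            style = max(
                PLAY_STYLES,
                key=lambda s: totals[s] / counts[s]
                + c * math.sqrt(math.log(t) / counts[s]),
            )
        # Present content of this style and observe the player's response.
        reward = match_score(style, observe_action(style))
        counts[style] += 1
        totals[style] += reward

    # The recovered player model is the style with the best observed fit.
    return max(PLAY_STYLES, key=lambda s: totals[s] / max(counts[s], 1))

def make_player():
    # Toy player whose true preference shifts from combat to puzzle
    # halfway through the session.
    state = {"t": 0}
    def observe(offered_style):
        state["t"] += 1
        preferred = "combat" if state["t"] < 100 else "puzzle"
        # Engage only with content matching the current preference
        # (a crude stand-in for real player behavior).
        return offered_style if offered_style == preferred else "other"
    return observe

if __name__ == "__main__":
    print(ucb1_player_model(make_player()))  # should recover "puzzle"

In this sketch the per-step probes play the role the paper assigns to distractions: short-lived content injections that test a hypothesis about the player without permanently biasing the game world.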
