Abstract

Developing Non-Player Characters (NPCs), i.e., game characters that interact with the game’s environment autonomously, with the flexibility to experiment with different behavior configurations, is not a trivial task. Traditionally, this has been done with techniques that limit the complexity of NPC behavior, as in the use of a Navigation Mesh (NavMesh) for navigation. For this problem, reinforcement learning has been shown to be more efficient and flexible than traditional techniques. However, integrating reinforcement learning into current game development tools is laborious, since a great deal of experimentation and coding is required. To address this, we have developed a modeling environment that integrates with a game development tool and allows the direct specification of reward functions and NPC agent components, with maximum code reuse and automatic code generation.
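A reward function for an NPC navigation task of the kind discussed above might look like the following minimal sketch. All names, weights, and the progress-based shaping scheme here are illustrative assumptions, not taken from the paper or from any particular game engine API:

```python
import math


def navigation_reward(prev_pos, curr_pos, goal,
                      step_penalty=0.01, goal_radius=0.5):
    """Reward shaped by progress toward the goal, minus a small per-step cost.

    Illustrative only: the weights and structure are assumptions for this
    sketch, not the paper's actual reward specification.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # Positive when the agent moved closer to the goal this step.
    progress = dist(prev_pos, goal) - dist(curr_pos, goal)
    reward = progress - step_penalty

    done = dist(curr_pos, goal) <= goal_radius
    if done:
        reward += 1.0  # terminal bonus for reaching the goal
    return reward, done
```

A modeling environment like the one described would let a designer declare such a function (progress term, step penalty, terminal bonus) at a high level and generate the corresponding agent code automatically.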
