Abstract

Enabling agents to play video games in general requires implementing a common action space that mimics human input devices such as a gamepad. Such an action space has to support concurrent discrete and continuous actions. To address this problem, this work investigates three approaches to applying concurrent discrete and continuous actions in Deep Reinforcement Learning (DRL). One approach applies a threshold to discretize a continuous action, another divides a continuous action into multiple discrete actions (buckets), and the third combines both action kinds in a multi-agent architecture. These approaches are benchmarked on two novel environments. In the first environment (Shooting Birds), the agent's goal is to accurately shoot birds by controlling a cross-hair. The second environment is a simplification of the game Beastly Rivals Onslaught, where the agent is in charge of its controlled character's survival. Across multiple experiments, the bucket approach is recommended because it trains faster than the multi-agent approach and is more stable than the threshold approach. Building on the contributions of this paper, subsequent work can begin training agents using visual observations.
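
To make the two discretization schemes concrete, below is a minimal sketch in Python. The function names, bucket count, and value ranges are illustrative assumptions, not taken from the paper; it only shows the general idea of thresholding a continuous policy output into a binary action and mapping a discrete bucket index back to a continuous value.

```python
import numpy as np

def threshold_action(continuous_output: float, threshold: float = 0.5) -> int:
    """Threshold approach: interpret one continuous policy output
    (assumed here to lie in [0, 1]) as a binary discrete action."""
    return 1 if continuous_output >= threshold else 0

def bucket_action(bucket_index: int, num_buckets: int = 11,
                  low: float = -1.0, high: float = 1.0) -> float:
    """Bucket approach: map a discrete action (a bucket index chosen
    by the policy) to a continuous value evenly spaced in [low, high]."""
    return low + bucket_index * (high - low) / (num_buckets - 1)

# Hypothetical example: a gamepad-like action with one continuous
# axis (driven by a discrete bucket head) and one trigger button
# (driven by a thresholded continuous head).
policy_output = {"axis_bucket": 7, "fire": 0.83}
axis_value = bucket_action(policy_output["axis_bucket"])  # -> 0.4
fire_pressed = threshold_action(policy_output["fire"])    # -> 1
print(axis_value, fire_pressed)
```

In a sketch like this, the number of buckets trades control resolution against the size of the discrete action space, which is one way to frame the stability/speed trade-off the abstract reports between the three approaches.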
