Abstract

Research in deep reinforcement learning (RL) has coalesced around improving performance on benchmarks like the Arcade Learning Environment. However, these benchmarks do not emphasize two characteristics that are often important in real-world domains: the need to change strategy based on latent contexts, and temporal sensitivity. As a result, research in RL has not given these challenges their due, producing algorithms that fail to recognize critical changes in context and have little notion of real-world time. This paper introduces the game of Space Fortress as an RL benchmark that specifically targets these characteristics. We show that existing state-of-the-art RL algorithms are unable to learn to play Space Fortress, and then confirm that this poor performance is due to the algorithms' context insensitivity. We also identify independent axes along which context and temporal sensitivity can be varied, allowing Space Fortress to serve as a testbed for studying both characteristics in combination and in isolation. We release Space Fortress as an open-source Gym environment.
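Since the abstract states that the environment is released through the Gym interface, the following is a minimal sketch of how an agent would interact with it via the standard Gym API. The environment id "SpaceFortress-v0" is an assumption for illustration; the released code should be consulted for the actual registration name.

```python
# Minimal Gym interaction loop. The environment id below is hypothetical;
# the released Space Fortress package defines the real one.
import gym

env = gym.make("SpaceFortress-v0")  # assumed id, not confirmed by the abstract

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # random policy stands in for an RL agent
    obs, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print("Episode return:", total_reward)
```

Any RL algorithm that consumes the standard `reset`/`step` interface can be dropped in place of the random policy above.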
