Abstract

Deep Reinforcement Learning (DRL) has been applied effectively in various complex environments, such as playing video games. In many game environments, DeepMind's baseline Deep Q-Network (DQN) game agents performed at a level comparable to that of humans. However, these DRL models require many experience samples to learn, adapt poorly to changes in the environment, and struggle with increasing complexity. In this study, we propose the Attention-Augmented Deep Q-Network (AADQN), which incorporates a combined top-down and bottom-up attention mechanism into the DQN game agent to highlight task-relevant features of the input. Our AADQN model uses particle-filter-based top-down attention that dynamically teaches an agent how to play a game by focusing on the most task-relevant information. Evaluating our agent across eight Atari 2600 games of varying complexity, we demonstrate that our model surpasses the baseline DQN agent. Notably, our model achieves greater flexibility and higher scores in fewer time steps. Across the eight game environments, AADQN achieved an average relative improvement of 134.93%. Pong and Breakout improved by 9.32% and 56.06%, respectively, while the more intricate SpaceInvaders and Seaquest showed even larger gains of 130.84% and 149.95%, respectively. This study shows that AADQN is highly effective in complex environments and yields modest improvements in simpler ones.
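To make the core idea concrete, the sketch below shows one plausible way a DQN-style network could be augmented with a spatial attention mask that reweights convolutional features toward task-relevant regions. This is only an illustrative sketch under assumptions, not the authors' implementation: the particle-filter-based top-down attention described in the abstract is not reproduced, and all class and parameter names (e.g. `AttentionAugmentedQNetwork`) are hypothetical.

```python
# Illustrative sketch: a DQN-style network whose convolutional features are
# modulated by a learned spatial attention mask. This loosely mirrors the idea
# of highlighting task-relevant input features; it does NOT implement the
# paper's particle-filter-based top-down attention. Names are hypothetical.
import torch
import torch.nn as nn


class AttentionAugmentedQNetwork(nn.Module):
    def __init__(self, num_actions: int, in_channels: int = 4):
        super().__init__()
        # Standard DQN-style convolutional trunk (Mnih et al. architecture).
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # Bottom-up attention: a 1x1 convolution producing a spatial saliency
        # mask in [0, 1] that rescales the feature map.
        self.attention = nn.Sequential(
            nn.Conv2d(64, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        # Q-value head over the attended features (84x84 inputs -> 7x7 maps).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)        # (B, 64, 7, 7) feature maps
        mask = self.attention(f)    # (B, 1, 7, 7) saliency weights
        return self.head(f * mask)  # Q-values for each action


if __name__ == "__main__":
    # Example: a batch of two 4-frame stacked 84x84 Atari observations.
    net = AttentionAugmentedQNetwork(num_actions=6)
    obs = torch.rand(2, 4, 84, 84)
    print(net(obs).shape)  # torch.Size([2, 6])
```

In such a design, the attention mask acts as a multiplicative gate on the feature maps before Q-value estimation, so gradients from the temporal-difference loss can shape which spatial regions the agent emphasizes during training.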
