Abstract

Reinforcement learning is a class of machine learning methods in which an agent learns through interaction with its environment. Deep Q-Network (DQN), a reinforcement learning model based on deep neural networks, succeeded in learning human-level control policies on a variety of Atari 2600 games from raw pixel inputs. Because the input to DQN is the game frames of the last four steps, DQN has difficulty mastering games that require remembering events more than four steps in the past. To alleviate this problem, Deep Recurrent Q-Network (DRQN) and Deep Attention Recurrent Q-Network (DARQN) were proposed. In DRQN, the first fully-connected layer after the convolutional layers is replaced with an LSTM to incorporate past information. DARQN adds a visual attention mechanism on top of DRQN. We propose two new reinforcement learning models: Deep Recurrent Q-Network with Truncated History (T-DRQN) and Deep Attention Recurrent Q-Network with Truncated History (T-DARQN). T-DRQN uses a truncated history so that the length of history to be considered can be controlled; T-DARQN adds a visual attention mechanism on top of T-DRQN. Experiments with our models on six Atari 2600 games show a level of performance between that of DQN and D(A)RQN. Furthermore, the results show the necessity of using past information of a truncated length, rather than using only the current information or all of the past information.
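
To make the architecture described above concrete, the following is a minimal sketch in PyTorch of a recurrent Q-network with a truncated history, in the spirit of T-DRQN: a DQN-style convolutional stack whose first fully-connected layer is replaced by an LSTM, unrolled only over the last `history_len` frames. The class name, layer sizes, and `history_len` parameter are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn

class TruncatedHistoryRQN(nn.Module):
    """Sketch of a recurrent Q-network with truncated history (T-DRQN style):
    per-frame convolutional features are fed through an LSTM, unrolled only
    over the last `history_len` frames, instead of a fully-connected layer."""

    def __init__(self, num_actions, history_len=4, lstm_size=512):
        super().__init__()
        self.history_len = history_len
        # DQN-style convolutional stack over single 84x84 grayscale frames.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        # LSTM replaces the first fully-connected layer of DQN.
        self.lstm = nn.LSTM(64 * 7 * 7, lstm_size, batch_first=True)
        self.q_head = nn.Linear(lstm_size, num_actions)

    def forward(self, frames):
        # frames: (batch, time, 1, 84, 84). Keep only the last
        # `history_len` steps -- the truncation that T-DRQN introduces.
        frames = frames[:, -self.history_len:]
        b, t = frames.shape[:2]
        feats = self.conv(frames.reshape(b * t, *frames.shape[2:]))
        feats = feats.reshape(b, t, -1)
        out, _ = self.lstm(feats)
        # Q-values are predicted from the hidden state at the final step.
        return self.q_head(out[:, -1])
```

Setting `history_len=1` reduces the model to using only the current frame, while a long `history_len` approaches the full-history behavior of DRQN; the paper's claim is that an intermediate, truncated length works best.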
