Deep reinforcement learning (DRL) has achieved great success in recent years by combining the feature extraction power of deep learning with the decision-making power of reinforcement learning. In the literature, convolutional neural networks (CNNs) are commonly used for feature extraction, and recent studies have shown that the performance of DRL algorithms can be greatly improved by incorporating an attention mechanism, where the raw attentions are used directly for decision-making. However, reinforcement learning is a trial-and-error process, and it is almost impossible to learn an optimal policy at the beginning of learning, especially in environments with sparse rewards. As a result, models based on raw attention can only remember and utilize attention information indiscriminately across different areas and may focus on task-irrelevant regions, which is usually unhelpful for finding the optimal policy. To address this issue, we propose a gated multi-attention mechanism and combine it with the deep Q-learning network, yielding GMAQN. The gated multi-attention representation module (GMA) in GMAQN can effectively eliminate task-irrelevant attention information in the early phase of the trial-and-error process and improve the stability of the model. The proposed method is demonstrated on the challenging domain of classic Atari 2600 games, and experimental results show that, compared with the baselines, it achieves better performance in terms of both game scores and the effect of focusing on key regions.
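To make the idea concrete, the following is a minimal sketch of a gated multi-attention head on top of a DQN-style convolutional trunk. The abstract does not specify the architecture, so the module structure, head count, layer sizes, and class names here are assumptions for illustration, not the authors' implementation: each attention head produces a spatial attention map over the CNN features, and a learned sigmoid gate scales each head so that task-irrelevant attention can be suppressed before the Q-value head.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedMultiAttention(nn.Module):
    """Illustrative gated multi-attention module (an assumption, not the
    paper's exact design): per-head spatial attention over CNN features,
    modulated by learned gates that can down-weight irrelevant heads."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # One 1x1 conv per head to score spatial locations.
        self.attn_convs = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_heads)
        )
        # Gate network: globally pooled features -> one gate per head in (0, 1).
        self.gate = nn.Sequential(nn.Linear(channels, num_heads), nn.Sigmoid())

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) convolutional feature maps
        b, c, h, w = feats.shape
        gates = self.gate(feats.mean(dim=(2, 3)))  # (B, num_heads)
        pooled = []
        for i, conv in enumerate(self.attn_convs):
            scores = conv(feats).view(b, -1)                 # (B, H*W)
            attn = F.softmax(scores, dim=1).view(b, 1, h, w)  # spatial attention
            g = gates[:, i].view(b, 1, 1, 1)                  # per-head gate
            pooled.append((g * attn * feats).sum(dim=(2, 3)))  # (B, C)
        return torch.cat(pooled, dim=1)  # (B, C * num_heads)


class GMAQNet(nn.Module):
    """DQN-style Q-network with the gated multi-attention head (sketch)."""

    def __init__(self, in_channels: int = 4, num_actions: int = 6):
        super().__init__()
        self.cnn = nn.Sequential(  # standard Atari DQN trunk
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
        )
        self.gma = GatedMultiAttention(channels=64, num_heads=4)
        self.q_head = nn.Sequential(
            nn.Linear(64 * 4, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (B, 4, 84, 84) stacked grayscale Atari frames
        return self.q_head(self.gma(self.cnn(obs)))


if __name__ == "__main__":
    q_values = GMAQNet()(torch.randn(2, 4, 84, 84))
    print(q_values.shape)  # torch.Size([2, 6])
```

In this sketch the gates act as a soft switch over attention heads: a gate near zero removes a head's contribution to the state representation, which is one plausible way to realize the abstract's claim of filtering out task-irrelevant attention early in training.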