Abstract

Deep Reinforcement Learning combines Reinforcement Learning with Deep Learning, using the strong representational ability of deep networks to handle high-dimensional inputs. However, the Deep Q-Network (DQN) algorithm, the first classical Deep Reinforcement Learning algorithm, does not make full use of the high-value information contained in the input state, and the computational burden of its network model leaves room for improvement. To address these problems, this paper proposes an improved DQN algorithm based on Convolution Block Attention (CBA-DQN). The algorithm introduces a Convolution Block Attention (CBA) module into the traditional DQN: by computing spatial and channel attention over the input state, the agent can focus on the most valuable parts, thereby alleviating the computational burden of the algorithm. Experimental results show that CBA-DQN improves on the performance of the traditional DQN algorithm in some Atari game tasks.
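
The abstract does not specify the exact architecture, so the following is only a minimal sketch of how a channel-and-spatial attention block of this kind might be attached to the standard DQN convolutional backbone, written in PyTorch. The module names, the placement of the attention block after the last convolution, the reduction ratio, the 84x84 four-frame Atari input, and the action count are all illustrative assumptions, not the paper's reported design.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: re-weights feature-map channels from pooled descriptors."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # average-pooled channel descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))    # max-pooled channel descriptor
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    """Spatial attention: re-weights spatial positions from channel-pooled maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # average over channels
        mx = x.amax(dim=1, keepdim=True)     # max over channels
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBADQN(nn.Module):
    """Standard DQN backbone with a channel + spatial attention block appended."""
    def __init__(self, in_channels=4, num_actions=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
            ChannelAttention(64),
            SpatialAttention(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, num_actions),
        )

    def forward(self, x):
        # x: (batch, 4, 84, 84) stack of preprocessed Atari frames
        return self.head(self.features(x))


# Usage sketch: Q-values for a batch of stacked frames.
q_net = CBADQN(in_channels=4, num_actions=6)
q_values = q_net(torch.zeros(1, 4, 84, 84))  # shape (1, 6)
```

The attention modules preserve the feature-map shape, so they can be dropped into the existing DQN training loop without changing the rest of the pipeline; only the forward pass gains the channel and spatial re-weighting described in the abstract.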
