Abstract
Modern deep reinforcement learning agents are capable of achieving superhuman performance in tasks like playing Atari games, based solely on visual input. However, because they rely on neural networks, the trained models lack transparency, which makes their inner workings incomprehensible to humans. A promising approach to gaining insight into the opaque reasoning process of neural networks is the layer-wise relevance propagation (LRP) concept. This visualization technique creates saliency maps that highlight the areas of the input which were relevant to the agent's decision-making process. Since such saliency maps cover every possible cause of a prediction, they often accentuate very diverse parts of the input. This makes the results difficult to understand for people without a machine-learning background. In this work, we introduce an adjustment to the LRP concept that utilizes only the most relevant neurons of each convolutional layer and thus generates more selective saliency maps. We test our approach with a dueling Deep Q-Network (DQN) agent which we trained on three Atari games of varying complexity. Since the dueling DQN approach considerably alters the neural network architecture of the original DQN algorithm, it requires its own LRP variant, which we present in this paper.
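To make the described adjustment concrete, the following is a minimal sketch of one LRP backward step through a single layer, restricted to the top-k most relevant neurons. It is an illustration under assumptions, not the paper's implementation: the function name `lrp_topk_step` is hypothetical, the layer is simplified to a linear map, and the z+ rule (distributing relevance along positive contributions) is one common LRP variant.

```python
import numpy as np

def lrp_topk_step(weights, activations, relevance, k):
    """One backward LRP step through a linear layer using the z+ rule,
    keeping only the k most relevant output neurons (a sketch of the
    'most relevant neurons' selection described in the abstract)."""
    # Selectivity: zero out all but the k largest relevance values.
    r = np.zeros_like(relevance)
    top = np.argsort(relevance)[-k:]
    r[top] = relevance[top]

    # z+ rule: redistribute relevance in proportion to each input's
    # positive contribution to the pre-activation of each kept neuron.
    w_pos = np.maximum(weights, 0.0)      # shape (out, in)
    z = w_pos @ activations + 1e-9        # stabilizer avoids division by zero
    s = r / z                             # per-output scaling factors
    return activations * (w_pos.T @ s)    # relevance assigned to the inputs

# Toy example: three output neurons, keep only the most relevant one.
w = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
a = np.array([1.0, 1.0])
rel_out = np.array([0.1, 0.2, 0.7])
rel_in = lrp_topk_step(w, a, rel_out, k=1)
```

Note that the kept relevance mass is conserved by construction (`rel_in` sums to the retained 0.7), while the contributions of the discarded neurons are suppressed, which is what yields the more selective saliency maps.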