Abstract

With advances in reinforcement learning (RL), agents are now being developed in high-stakes application domains such as healthcare and transportation. Explaining the behavior of these agents is challenging: the environments in which they act have large state spaces, and their decision-making can be affected by delayed rewards. To address this problem, several approaches have been developed. Some attempt to convey the global behavior of the agent, describing the actions it takes in different states; others devise local explanations that provide information about the agent's decision-making in a particular state. In this paper, we combine global and local explanation methods and evaluate their joint and separate contributions, providing (to the best of our knowledge) the first user study of combined local and global explanations for RL agents. Specifically, we augment strategy summaries, which extract important trajectories of states from simulations of the agent, with saliency maps, which show what information the agent attends to. Our results show that the choice of which states to include in the summary (global information) strongly affects people's understanding of agents: participants shown summaries that included important states significantly outperformed participants who were presented with agent behavior in a set of world-states that are likely to appear during gameplay. We find mixed results with respect to augmenting demonstrations with saliency maps (local information): the addition of saliency maps, in the form of raw heat maps, did not significantly improve performance in most cases. However, we do find some evidence that saliency maps can help users better understand what information the agent relies on during its decision-making, suggesting avenues for future work that can further improve explanations of RL agents.
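
To make the local component concrete, the sketch below computes a simple gradient-based saliency map for a DQN-style agent: the gradient of the greedy action's Q-value with respect to the input indicates which pixels most influenced the decision. Gradient saliency is only one of several saliency techniques, and the network and names used here (q_net, saliency_map) are illustrative assumptions rather than the implementation evaluated in the paper.

    # Minimal gradient-based saliency sketch for a DQN-style agent (PyTorch).
    # `q_net` is any network mapping an image state to per-action Q-values;
    # all names here are illustrative, not the paper's implementation.
    import torch

    def saliency_map(q_net: torch.nn.Module, state: torch.Tensor) -> torch.Tensor:
        """Heat map over input pixels for the greedy action in `state` (C, H, W)."""
        state = state.clone().detach().requires_grad_(True)
        q_values = q_net(state.unsqueeze(0)).squeeze(0)  # shape: (num_actions,)
        q_values.max().backward()            # d(greedy Q) / d(input pixels)
        return state.grad.abs().sum(dim=0)   # aggregate channels -> (H, W) heat map

A heat map produced this way can be overlaid on the game frame being shown to the user, which is roughly how the combined explanations in the study were presented (the abstract describes them as raw heat maps).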

Highlights

  • The maturing of artificial intelligence (AI) methods has led to the introduction of intelligent systems in areas such as healthcare and transportation [69]

  • The results of this study reinforce our prior findings [5] showing that summaries generated by HIGHLIGHTS-DIV lead to significantly improved performance of participants in the agent comparison task compared to random summaries, and show that this result generalizes to reinforcement learning (RL) agents based on neural networks (a sketch of the HIGHLIGHTS-DIV selection step follows this list)

  • This work is a first step toward the development of combined explanation methods for reinforcement learning (RL) agents that provide users with both global information regarding the agent’s strategy, as well as local information regarding its decision-making in specific world-states
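
As referenced above, the following sketch shows the state-selection idea behind HIGHLIGHTS-DIV: states observed in simulation are ranked by an importance measure (the spread between the best and worst action's Q-value, as in the HIGHLIGHTS papers), and a diversity filter avoids selecting near-duplicate states. The full algorithm operates online with a budget and extracts a short trajectory around each selected state; the Euclidean distance filter, threshold, and function names below are simplifying assumptions.

    # Illustrative sketch of importance-based summary selection in the spirit
    # of HIGHLIGHTS-DIV. The importance measure follows the HIGHLIGHTS papers;
    # the diversity filter, threshold, and names are simplifying assumptions.
    import numpy as np

    def importance(q_values: np.ndarray) -> float:
        """High when the choice of action matters, low when all actions are similar."""
        return float(q_values.max() - q_values.min())

    def summarize(states, q_values_per_state, k=5, min_dist=1.0):
        """Pick up to k important, mutually diverse states from simulated gameplay."""
        ranked = sorted(zip(states, q_values_per_state),
                        key=lambda sq: importance(sq[1]), reverse=True)
        summary = []
        for state, _ in ranked:
            # Diversity filter: skip states too close to one already selected.
            if all(np.linalg.norm(state - chosen) >= min_dist for chosen in summary):
                summary.append(state)
            if len(summary) == k:
                break
        return summary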

Introduction

The maturing of artificial intelligence (AI) methods has led to the introduction of intelligent systems in areas such as healthcare and transportation [69]. Since these systems are used by people in such high-stakes domains, it is crucial that the people who interact with them can understand and anticipate their behavior. The recognition of the importance of human understanding of agents' behavior, together with the complexity of current AI systems, has led to a growing interest in developing "explainable AI" methods [22, 29, 2]. In contrast to classical agent planning approaches such as the belief-desire-intention (BDI) framework [58], in which the goals of the agent are explicitly defined, current agents often use policies trained with complex reward functions and feature representations that are difficult for people to understand.
