Abstract

Lack of explainability is hindering the practical adoption of high-performance Deep Reinforcement Learning (DRL) controllers. Prior work has focused on explaining a controller by identifying salient features of its input. However, these feature-based methods consider only inputs and therefore cannot fully explain the controller's policy. In this paper, we put forward future-based explainers as an essential tool for providing insights into the controller's decision-making process and, thereby, for facilitating the practical deployment of DRL controllers. We highlight two applications of future-based explainers in the networking domain: online safety assurance and guided controller design. Finally, we provide a roadmap for the practical development and deployment of future-based explainers for DRL network controllers.
