Abstract

In the current Artificial Intelligence (AI) landscape, addressing explainability and interpretability in Machine Learning (ML) is of critical importance. In fact, the vast majority of works on AI focus on Deep Neural Networks (DNNs), which are not interpretable, as they are extremely hard for humans to inspect and understand. This is a crucial disadvantage of these methods, which hinders their trustworthiness in high-stakes scenarios. On the other hand, interpretable models are considerably easier to inspect, which allows humans to test them exhaustively and thus trust them. While the fields of eXplainable Artificial Intelligence (XAI) and Interpretable Artificial Intelligence (IAI) are progressing in supervised settings, the field of Interpretable Reinforcement Learning (IRL) is lagging behind. Several approaches leveraging Decision Trees (DTs) for IRL have been proposed in recent years. However, all of them use goal-directed optimization methods, which may have limited exploration capabilities. In this work, we extend a previous study on the applicability of Quality–Diversity (QD) algorithms to the optimization of DTs for IRL. We test the methods on two well-known Reinforcement Learning (RL) benchmark tasks from OpenAI Gym, comparing their results in terms of score and “illumination” patterns. We show that using QD algorithms is an effective way to explore the search space of IRL models. Moreover, we find that, in the context of DTs for IRL, QD approaches based on MAP-Elites (ME) and its variant Covariance Matrix Adaptation MAP-Elites (CMA-ME) can significantly improve convergence speed over goal-directed approaches.
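
For readers unfamiliar with Quality–Diversity search, the following is a minimal, hypothetical Python sketch of the MAP-Elites loop (not the paper's implementation): it keeps an archive of elite solutions indexed by a discretized behavior descriptor, so the search "illuminates" the descriptor space rather than converging to a single optimum. The callables `random_solution`, `mutate`, `evaluate`, and `descriptor` are placeholders; in the paper's setting they would roughly correspond to sampling a DT, mutating it, measuring its episode return on the Gym task, and computing its behavioral features.

```python
import random

def map_elites(random_solution, mutate, evaluate, descriptor,
               bins, n_init=100, n_iters=10_000):
    """Minimal MAP-Elites sketch: archive maps a descriptor cell to its elite."""
    archive = {}  # cell index (tuple) -> (solution, fitness)

    def try_insert(sol):
        # Keep the solution only if its cell is empty or it beats the current elite.
        fit = evaluate(sol)
        cell = tuple(min(int(d * b), b - 1)              # descriptor assumed in [0, 1)
                     for d, b in zip(descriptor(sol), bins))
        if cell not in archive or fit > archive[cell][1]:
            archive[cell] = (sol, fit)

    for _ in range(n_init):                              # random initialization
        try_insert(random_solution())
    for _ in range(n_iters):                             # mutate a randomly chosen elite
        parent, _ = random.choice(list(archive.values()))
        try_insert(mutate(parent))
    return archive


if __name__ == "__main__":
    # Toy usage: solutions are 2-D points in [0, 1]^2, fitness is negative
    # distance to the centre, and the descriptor is the point itself.
    archive = map_elites(
        random_solution=lambda: [random.random(), random.random()],
        mutate=lambda s: [min(max(x + random.gauss(0, 0.1), 0.0), 0.999) for x in s],
        evaluate=lambda s: -((s[0] - 0.5) ** 2 + (s[1] - 0.5) ** 2),
        descriptor=lambda s: s,
        bins=(10, 10),
    )
    print(f"{len(archive)} cells filled")
```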
