Abstract

Explainable artificial intelligence is a research topic whose relevance has increased in recent years, especially with the advent of large machine learning models. However, few approaches have been proposed to improve interpretability in the case of quantum artificial intelligence, and many existing quantum machine learning models in the literature can be considered essentially black boxes. In this article, we argue that an appropriate semantic interpretation of a given quantum circuit that solves a problem can be of interest to the user not only to certify the correct behavior of the learned model, but also to obtain deeper insight into the problem at hand and its solution. We focus on decision-making problems that can be formulated as classification tasks and propose a method for learning quantum rule-based systems to solve them using evolutionary optimization algorithms. The approach is tested by learning rules that solve control and decision-making tasks in reinforcement learning environments, providing interpretable agent policies that help to understand the internal dynamics of an unknown environment. Our results show that the learned policies are not only highly explainable, but can also help detect irrelevant problem features and produce a minimal set of rules.
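To make the high-level idea concrete, the sketch below illustrates the generic evolutionary rule-learning loop the abstract alludes to, in a purely classical and simplified form. It is not the paper's method: the quantum circuit encoding is omitted, and the rule representation, fitness function, mutation scheme, and synthetic data are all invented for illustration.

```python
import numpy as np

# Illustrative only: evolve a small voting rule set that classifies
# two-feature samples, mimicking the outer loop of evolutionary rule learning.
rng = np.random.default_rng(0)

# Synthetic binary-classification data: label is 1 when feature 0 > 0.5.
X = rng.random((200, 2))
y = (X[:, 0] > 0.5).astype(int)

N_RULES = 4        # rules per individual (hypothetical encoding)
POP_SIZE = 30
GENERATIONS = 40

def random_individual():
    # Each rule is a triple in [0, 1): (feature selector, threshold, class).
    return rng.random((N_RULES, 3))

def predict(ind, X):
    votes = np.zeros((len(X), 2))
    for feat, thr, cls in ind:
        fired = X[:, int(feat * X.shape[1]) % X.shape[1]] > thr
        votes[fired, int(cls > 0.5)] += 1
    return votes.argmax(axis=1)

def fitness(ind):
    # Training accuracy as a stand-in fitness measure.
    return (predict(ind, X) == y).mean()

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    # Keep the best half, refill with mutated copies (simple (mu+lambda) scheme).
    elite = [population[i] for i in scores.argsort()[::-1][:POP_SIZE // 2]]
    children = [np.clip(e + rng.normal(0, 0.1, e.shape), 0, 1) for e in elite]
    population = elite + children

best = max(population, key=fitness)
print("Training accuracy of best rule set:", fitness(best))
```

In the paper's setting, the individuals would instead encode quantum rule-based classifiers evaluated on reinforcement learning decision data, but the selection-mutation structure of the search is analogous.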

