Abstract

This paper presents a new method for objectively measuring the explainability of textual information, such as the outputs of Explainable AI (XAI). We introduce a metric called Degree of Explainability (DoX), drawing inspiration from Ordinary Language Philosophy and Achinstein's theory of explanations. The underlying assumption is that the degree of explainability is directly proportional to the number of relevant questions that a piece of information can correctly answer. We operationalize this concept by formalizing the DoX metric through a mathematical formula, which we integrate into a software tool named DoXpy. DoXpy relies on pre-trained deep language models for knowledge extraction and answer retrieval to estimate DoX, turning our theoretical insights into a practical tool for real-world applications. To confirm the effectiveness and consistency of our approach, we conducted comprehensive experiments and user studies with over 190 participants, evaluating the quality of explanations produced by healthcare and finance XAI-based software systems. Our results show that increases in objective explanation usability correlate with increases in the DoX score. These findings suggest that the DoX metric is congruent with other mainstream explainability measures while providing a more objective and cost-effective alternative to non-deterministic user studies. We therefore discuss the potential of DoX as a tool for evaluating the legal compliance of XAI systems. By bridging the gap between theory and practice in Explainable AI, our work fosters transparency, understandability, and legal compliance. DoXpy and related materials have been made available online to ensure reproducibility.
