Abstract

The rapidly expanding field of explainable Artificial Intelligence (xAI) has given rise to a multitude of techniques and methodologies, yet this expansion has opened a widening gap between existing xAI approaches and their practical application. This poses a considerable obstacle for data scientists striving to identify the optimal xAI technique for their needs. To address this problem, our study presents a customized decision support framework to aid data scientists in choosing a suitable xAI approach for their use case. Drawing on a literature survey and insights from interviews with five experienced data scientists, we introduce a decision tree, grounded in the trade-offs inherent in various xAI approaches, that guides the selection among six commonly used xAI tools. Our work critically examines six prevalent ante-hoc and post-hoc xAI methods, assessing their applicability in real-world contexts through expert interviews. The aim is to equip data scientists and policymakers to select xAI methods that not only demystify the decision-making process but also enrich user understanding and interpretation, ultimately advancing the application of xAI in practical settings.
