Abstract

Artificial Intelligence (AI) is becoming fundamental to almost every sector of activity in our society. However, most modern AI techniques (e.g., Machine Learning – ML) are black boxes, which hinders their adoption by practitioners in many application fields. This issue has led to the recent emergence of a new research area in AI called Explainable Artificial Intelligence (XAI), which aims to make AI-based decision-making processes and outcomes easy for humans to understand, interpret, and justify. Since 2018, research on XAI has grown exponentially, motivating several review studies. However, these reviews focus mainly on proposing taxonomies of XAI methods. Yet XAI is by nature a highly applied research field, and beyond XAI methods it is also important to investigate how XAI is concretely used in industry and to derive best practices for better implementation and adoption. Studies on this latter point are lacking. To fill this research gap, we first propose a holistic review of business applications of XAI, following the Theory, Context, Characteristics, and Methodology (TCCM) protocol. Based on the findings of this review, we then propose a six-step methodological and theoretical framework that practitioners and stakeholders can follow to improve the implementation and adoption of XAI in their business applications. We particularly highlight the need to rely on domain and analytical theories to explain the whole analytical process, from the relevance of the business question to the robustness checking and validation of the explanations provided by XAI methods. Finally, we propose seven important future research avenues.
