Abstract

To make sense of a complex Artificial Intelligence (AI) model, the model must be trustworthy, transparent, scalable, understandable, and explainable. Trust in an AI model rests on the decisions it takes inside its black-box environment. Explainable AI (XAI) therefore helps developers understand how the model behaves when it makes a particular decision. As AI models grow more complex, scientists find it increasingly difficult to interpret their outcomes, so XAI is needed to explain an AI model's decision-making process. Moreover, to build trustworthy AI models, organizations embed ethical principles in their AI processes. In this paper, we study the banking sector, where an inefficient onboarding process fails to establish a relationship with the customer. Because of this inefficiency, banks lose users' trust, which widens the gap in the customer relationship and further hampers onboarding. To bridge this gap, we explain the decision-making process of the AI model through XAI.
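The abstract does not name a specific XAI technique, so the sketch below is an illustration only of the kind of local, per-decision explanation such tools (e.g., SHAP or LIME) produce. It uses a linear model, whose coefficient-times-value contributions are an exact local attribution; all feature names and data here are hypothetical, not from the paper.

```python
# Minimal sketch (not the authors' code): explaining one onboarding
# decision via per-feature contributions of a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical onboarding features: credit_score, income, doc_completeness
feature_names = ["credit_score", "income", "doc_completeness"]
X = rng.normal(size=(500, 3))
y = (X @ np.array([1.5, 1.0, 2.0]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value decomposes the decision
# score (log-odds) exactly; the intercept acts as a shared baseline.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.3f}")
print("decision:", "approve" if model.predict([applicant])[0] else "review")
```

In practice, a nonlinear onboarding model would call for a model-agnostic explainer (such as SHAP or LIME) instead of this exact linear decomposition, but the output a developer inspects is the same: a signed contribution per feature for the specific decision being audited.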
