Abstract

In the last decade, the world has seen tremendous growth in technology, driven by improved access to data, cloud-computing resources, and the evolution of machine learning (ML) algorithms. Intelligent systems have achieved significant performance gains as a result, and the state-of-the-art results of these algorithms across various domains have increased the popularity of artificial intelligence (AI). However, alongside these achievements, the non-transparency, inscrutability, and lack of explainability and interpretability of most state-of-the-art techniques are considered an ethical issue. These flaws impede the acceptance of complex ML models in fields such as medicine, banking and finance, security, and education, and they have raised many concerns about the safety and security of users of ML systems. Under current regulations and policies, such systems must be transparent in order to satisfy the right to explanation. Owing to this lack of trust in existing ML-based systems, explainable artificial intelligence (XAI) methods are gaining popularity. Although neither the domain nor the methods are new, they are attracting attention for their ability to open the black box. Explainable AI methods vary in strength and can provide insights into a system ranging from the explanation of a single feature to the interpretability of a sophisticated ML architecture. In this paper, we present a survey of known techniques in the field of XAI. Moreover, we suggest future research directions for developing responsible AI systems, and we emphasize the necessity of human knowledge-oriented systems for adopting AI in real-world applications with trust and high fidelity.
