Abstract

Artificial intelligence–based systems have been developed and successfully deployed in applications such as home appliances, defense systems, virtual assistants, robotics, self-driving vehicles, and many more. Their success rests on their ability to make accurate and timely decisions. The other side of these systems, however, is a lack of transparency: they behave as black boxes. Owing to the opaque nature of existing artificial intelligence systems, researchers cannot interpret how decisions are derived from given inputs. This lack of openness not only leads end users to distrust the system but also makes it difficult for machine learning engineers to detect and mitigate faults when the system fails to produce the desired output. The solution is to open up the black-box workings of the system and provide the required explanations and interpretations, making the whole process humanly understandable and meaningful. This chapter focuses on the need for explainable artificial intelligence systems, the paradigms that currently exist to achieve explainability, the various forms of explanation expected by different stakeholders, and the challenges involved in building transparent systems that support trustworthy human–computer interaction.
