Abstract

Black-box Artificial Intelligence (AI) systems for automated decision making, often built by learning over (big) human data, map a user's features into a class or a score without exposing why. This is problematic not only for the lack of transparency, but also for the possible biases the algorithms inherit from human prejudices and collection artefacts hidden in the training data, which can lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity, and understanding. Explainable AI (XAI) addresses these challenges, and for years different AI communities have studied the topic, producing different definitions, evaluation protocols, motivations, and results. This chapter provides a reasoned introduction to the work on Explainable AI to date and surveys the literature with a focus on symbolic AI-related approaches. We motivate the need for XAI in real-world and large-scale applications, present state-of-the-art techniques and best practices, and discuss the many open challenges.
