Abstract

Artificial intelligence (AI) has evolved over the last 50 years. Recent AI techniques, machine learning (ML) and deep learning, have been successfully applied in many fields such as recommendation, computer vision, machine translation, social media, and system diagnostics. High-performing AI models, such as ensembles and neural networks, have surpassed the predictive performance of traditional AI techniques such as symbolic and logic-based expert systems, but in exchange their behavior has become increasingly difficult for humans to interpret and understand. Yet the explainability of AI and the interpretability of ML are prerequisites when AI-based systems are to be adopted for critical decision making in the real world. How can we trust such systems with important decisions if we cannot understand how they work? Unfortunately, systematic methods for explainable AI (XAI) are not yet mature, even in academia. This chapter introduces the meaning of XAI, along with categories of AI explanation and interpretation methods. In addition, applications of XAI and related important issues are discussed.

