Abstract
Artificial Intelligence (AI) has become an important component of many software applications and has reached the point where it informs complex and critical decisions in our lives. However, most successful AI-powered applications rely on black-box approaches (e.g., deep neural networks) that learn models capable of making predictions and decisions. While these advanced models can achieve high accuracy, they are generally unable to explain their decisions (e.g., predictions) to users. As a result, there is a pressing need for explainable machine learning systems that governments, organizations, industries, and users can trust. This paper classifies and compares the main findings in the domain of explainable machine learning and deep learning. We also discuss the application of Explainable AI (XAI) in sensitive domains such as cybersecurity. In addition, we characterize each reviewed article by the methods and techniques it uses to achieve XAI, which allows us to discern the strengths and limitations of existing XAI techniques. We conclude by discussing substantial challenges and future research directions related to XAI.