Abstract

Since traditional machine learning (ML) techniques often rely on black-box models, the internal operation of the classifier is opaque to humans. Because of this black-box nature, the trustworthiness of its predictions is sometimes questionable. Interpretable machine learning (IML) addresses this shortcoming by dissecting ML classifiers and providing more reasoned explanations of their predictions. In this paper, we explore several IML methods and their applications in various domains. We present a detailed survey of IML methods and identify the essential building blocks of a black-box model. We also describe the requirements of IML methods and, for completeness, propose a taxonomy that classifies IML methods into distinct groups and sub-categories. The goal is to describe the state of the art in IML and to explain it in more concrete and understandable terms by providing a better basis of knowledge for these building blocks and the associated requirements analysis.
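To make the idea of post hoc interpretation concrete, the sketch below (an illustration, not a method from this paper) applies permutation feature importance, a common model-agnostic IML technique, to a black-box classifier using scikit-learn; the dataset and model choices are assumptions for demonstration only.

```python
# Minimal sketch of a model-agnostic IML technique: permutation importance.
# Assumption: scikit-learn is available; dataset and model are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black-box") ensemble classifier.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explain the trained model post hoc: shuffle each feature on held-out data
# and measure how much the model's accuracy degrades.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: t[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Permutation importance is only one of the model-agnostic approaches such a survey typically covers; local surrogate methods (e.g., LIME) and attribution methods (e.g., SHAP) follow the same post hoc pattern of probing a trained black box rather than constraining its architecture.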
