Abstract

Interpretability of Machine Learning (ML) methods and models is a fundamental issue that concerns a wide range of data mining research. This topic is not only an academic concern but also a crucial factor in the public acceptance of ML in practical contexts. Indeed, a lack of interpretability can be a real drawback in various application areas, such as healthcare, biology, sociology, and industrial decision support systems. In fact, an algorithm that does not give enough information about the learning process and the learned model may simply be discarded in favor of less accurate but more interpretable approaches. Several approaches have been proposed to interpret high-performing models, such as Neural Networks and Random Forests, but there is still no consensus about what interpretability refers to. Interestingly, the term has been associated with different notions depending on each author's point of view, the nature of the problem being treated, and the users concerned by the explanation. Therefore, this paper primarily aims to provide a thorough overview of the aspects related to the interpretability of the ML learning process and the resulting models, as reported in the literature, and to organize these aspects into metrics that can be used for scoring ML interpretability.
