Abstract

This study reviews libraries that provide decision support for AI models, with the goal of helping practitioners find libraries that support visual explainability and interpretability of model output. Especially in sensitive application areas such as medicine, this is crucial for understanding the decision-making process and for safe application. We therefore use the reasoning of a glioma classification model as our underlying case. We present a comparison of 11 identified Python libraries that complement the better-known SHAP and LIME libraries for visualizing explainability. The libraries were selected based on criteria such as being implemented in Python, supporting visual analysis, having thorough documentation, and being actively maintained. We showcase and compare four libraries for global interpretations (ELI5, Dalex, InterpretML, and SHAP) and three libraries for local interpretations (LIME, Dalex, and InterpretML). As a use case, we process a combination of openly available glioma data sets to study feature importance when classifying the grade II, III, and IV brain tumor subtypes glioblastoma multiforme (GBM), anaplastic astrocytoma (AASTR), and oligodendroglioma (ODG), using 1276 samples and 252 attributes. The exemplified model confirms known variations, and studying local explainability helps reveal lesser-known variations as putative biomarkers. The full comparison spreadsheet and implementation examples can be found in the appendix.
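
To illustrate the kind of analysis compared in this study, the following minimal sketch applies SHAP for a global interpretation and LIME for a local interpretation to a multi-class tabular classifier. It is not the paper's implementation: the synthetic data merely stands in for the processed glioma matrix (1276 samples, 252 attributes), and the feature names, class labels, and random-forest model are placeholders.

  # Minimal sketch (not the study's code): global SHAP and local LIME
  # explanations for a multi-class tabular classifier. Synthetic data
  # stands in for the processed glioma matrix; all names are placeholders.
  import shap
  from lime.lime_tabular import LimeTabularExplainer
  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier
  from sklearn.model_selection import train_test_split

  X, y = make_classification(n_samples=1276, n_features=252,
                             n_informative=30, n_classes=3,
                             random_state=0)
  feature_names = [f"attr_{i}" for i in range(X.shape[1])]  # placeholder attribute names
  class_names = ["GBM", "AASTR", "ODG"]

  X_train, X_test, y_train, y_test = train_test_split(
      X, y, test_size=0.2, stratify=y, random_state=0)

  model = RandomForestClassifier(n_estimators=200, random_state=0)
  model.fit(X_train, y_train)

  # Global interpretation: SHAP summary plot over the test set.
  # A multi-class model yields per-class attributions (a list in older
  # SHAP versions, a 3-D array in newer ones); plot the first class.
  shap_values = shap.TreeExplainer(model).shap_values(X_test)
  values = shap_values[0] if isinstance(shap_values, list) else shap_values[..., 0]
  shap.summary_plot(values, X_test, feature_names=feature_names)

  # Local interpretation: LIME explanation for a single test sample.
  lime_explainer = LimeTabularExplainer(
      X_train, feature_names=feature_names,
      class_names=class_names, mode="classification")
  explanation = lime_explainer.explain_instance(
      X_test[0], model.predict_proba, num_features=10)
  print(explanation.as_list())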

Highlights

  • In recent years, successfully applied machine learning (ML) algorithms have delivered extensive benefits across many application areas

  • Using the processed data from the combined studies described in the materials section, we trained a model to classify cancer subtypes by distinguishing between the Oncotree codes glioblastoma multiforme (GBM), anaplastic astrocytoma (AASTR), and oligodendroglioma (ODG)

  • Among the most relevant libraries matching our selection criteria, we identified three that implement the SHAP approach: InterpretML [28], Dalex [29], and SHAP [17] (see the sketch after this list)
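
As a hypothetical illustration of how one of these SHAP-implementing libraries is driven, the sketch below uses Dalex to compute a global, permutation-based variable importance and a local, SHAP-style attribution for a single observation. The classifier, data, and names are placeholders rather than the study's pipeline, and the multi-class problem is simplified to a one-vs-rest view of the first class so that Dalex receives a single numeric prediction per observation.

  # Hypothetical sketch (not the study's pipeline): driving Dalex on a
  # placeholder classifier. The first class stands in for "GBM" and is
  # explained one-vs-rest.
  import pandas as pd
  import dalex as dx
  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier

  X, y = make_classification(n_samples=1276, n_features=252,
                             n_informative=30, n_classes=3,
                             random_state=0)
  X = pd.DataFrame(X, columns=[f"attr_{i}" for i in range(X.shape[1])])
  y_first = (y == 0).astype(int)  # one-vs-rest target for the first class

  model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

  # Explain the predicted probability of the first class.
  explainer = dx.Explainer(
      model, X, y_first,
      predict_function=lambda m, d: m.predict_proba(d)[:, 0],
      label="glioma subtype classifier (class 0 vs. rest)")

  # Global view: permutation-based variable importance.
  explainer.model_parts().plot()

  # Local view: SHAP-style attributions for a single observation.
  explainer.predict_parts(X.iloc[[0]], type="shap").plot()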


Summary

Introduction

Successfully applied machine learning (ML) algorithms have brought extensive benefits to many application areas. ML and deep learning (DL) establish artificial intelligence (AI) models that can be applied in many fields of research, such as healthcare [1], cancer classification [2,3,4], autonomous robots and vehicles [5], image processing [6], manufacturing, and many more [7,8,9,10], enhancing the corresponding fields. The models resulting from ML are suitable for performing different tasks, such as recommendation, ranking, forecasting, classification, or clustering.

