Abstract

The best-performing machine learning (ML) models often suffer from a lack of interpretability. This study presents an empirical evaluation of two interpretability techniques: the global surrogate and local interpretable model-agnostic explanations (LIME). Experiments were carried out with two black-box models, a multilayer perceptron and a support vector machine, on two breast cancer tabular datasets. The results show that local interpretability can work alongside global interpretability to provide insights into a model's behaviour. Quantitative evaluations show that the global surrogate slightly outperforms LIME. Interpretability techniques thus have the potential to mitigate the interpretability trade-off of opaque models.
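To illustrate the two techniques named in the abstract, the sketch below trains an MLP black box on the scikit-learn breast cancer dataset, fits a shallow decision tree as a global surrogate of its predictions, and produces a LIME explanation for one test instance. This is a minimal, hedged example assuming scikit-learn and the `lime` package; the dataset, model settings, and surrogate choice are illustrative assumptions, not the study's actual experimental setup.

```python
# Illustrative sketch only: not the authors' code or settings.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Black-box model: a multilayer perceptron.
black_box = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0)
black_box.fit(X_train, y_train)

# Global surrogate: an interpretable tree trained to mimic the black box's
# predictions; fidelity measures how well it reproduces them on held-out data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to the black box: {fidelity:.3f}")

# LIME: local explanation of the black box for a single test instance.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True)
explanation = explainer.explain_instance(
    X_test[0], black_box.predict_proba, num_features=5)
print(explanation.as_list())
```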
