Abstract

Explaining machine learning models without any knowledge of their inner workings is an ambitious and often necessary challenge. Local interpretable model-agnostic explanations (LIME) is undoubtedly one of the best-known methods for this task; however, its slow performance can render LIME unsuitable for industry-scale applications. In this paper, we evaluate the Model-Agnostic SHAPley value explanations (MASHAP) method as a faster alternative for explaining black-box models, and we compare it with LIME across a series of evaluation metrics. Our experiments give valid reasons to choose MASHAP over LIME, since it delivers roughly the same consistency significantly faster.
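
To make the comparison concrete, below is a minimal, illustrative sketch of the kind of LIME-versus-Shapley workflow the abstract describes, assuming the public `lime` and `shap` Python packages and an arbitrary scikit-learn classifier. MASHAP itself is not assumed to be publicly packaged, so shap's model-agnostic KernelExplainer stands in here purely as a hypothetical analogue; this shows the comparison setup, not the paper's actual method or its reported speedup.

    # Hedged sketch: compares LIME against a model-agnostic Shapley explainer
    # on a single instance. KernelExplainer is a stand-in for MASHAP, which is
    # an assumption; dataset and model are illustrative choices.
    import time
    import lime.lime_tabular
    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # LIME: fits a local surrogate model around one instance.
    lime_explainer = lime.lime_tabular.LimeTabularExplainer(
        X, mode="classification", discretize_continuous=True
    )
    start = time.time()
    lime_exp = lime_explainer.explain_instance(
        X[0], model.predict_proba, num_features=10
    )
    lime_time = time.time() - start

    # Model-agnostic Shapley values: needs only predict_proba, like MASHAP's
    # black-box setting. A background sample estimates feature expectations.
    background = shap.sample(X, 50)
    shap_explainer = shap.KernelExplainer(model.predict_proba, background)
    start = time.time()
    shap_values = shap_explainer.shap_values(X[0], nsamples=200)
    shap_time = time.time() - start

    print(f"LIME: {lime_time:.2f}s, Shapley (KernelExplainer): {shap_time:.2f}s")

Both explainers treat the classifier strictly as a black box (only `predict_proba` is called), which is the setting in which the paper's runtime and consistency comparison applies.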
