Abstract
It has become increasingly important that machine learning techniques can explain the results they generate, especially in so-called critical domains such as health and energy. For this reason, several explainability methods have been proposed in the literature, among them Local Interpretable Model-Agnostic Explanations (LIME) and feature importance, which have been applied to a wide range of problems. The objective of this work is to analyze the explainability of the Learning Algorithm for Multivariate Data Analysis (LAMDA) and Fuzzy Cognitive Maps (FCM), techniques that have attracted great interest due to their interpretability, their handling of uncertainty during the inference process, and their ease of use, among other properties. For this study, data from two critical domains were considered, health (COVID-19 and Dengue datasets) and energy (an energy price dataset), for which prediction/classification models were built using the LAMDA and FCM techniques. Two explainability techniques were then used to analyze the explainability provided by each model: one based on the LIME method and the other on the feature importance method, the latter adapted to our work so that it relies on the permutation of feature values. Finally, the work proposes two new explainability methods, one based on causal inference and the other on the degrees of membership of the variables to the classes; the latter, in particular, enables an explainability analysis per class. The new explainability methods reproduce the results of well-known explainability methods in the literature, such as LIME and feature importance, at a lower execution cost, while additionally providing a per-class explainability analysis. This work opens the door to new research on class-level explainability. Furthermore, we observe that machine learning approaches based on causal or fuzzy relationships are largely self-explanatory, but specific explainability methods such as those proposed in this work allow us to study particular aspects, such as highly important variables per class, which general explainability methods like LIME and feature importance do not.
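As a point of reference for the baseline methods mentioned above, the following is a minimal sketch of permutation-based feature importance, assuming a generic classifier exposed as a callable that returns predicted labels; the function name, the toy model, and the synthetic data are illustrative assumptions, not the paper's implementation.

import numpy as np

def permutation_importance(model, X, y, n_repeats=5, seed=None):
    # Importance of a feature = drop in accuracy when its values are shuffled,
    # which breaks the link between that feature and the target.
    rng = np.random.default_rng(seed)
    baseline = np.mean(model(X) == y)              # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])              # permute only column j
            scores.append(np.mean(model(X_perm) == y))
        importances[j] = baseline - np.mean(scores)
    return importances

if __name__ == "__main__":
    # Toy usage: a threshold "classifier" on synthetic data where only
    # feature 0 actually determines the label.
    data_rng = np.random.default_rng(0)
    X = data_rng.normal(size=(200, 3))
    y = (X[:, 0] > 0).astype(int)
    model = lambda data: (data[:, 0] > 0).astype(int)
    print(permutation_importance(model, X, y, seed=1))

In this sketch, only the first feature should receive a non-negligible importance score; the class-membership and causal-inference methods proposed in the paper are aimed at refining this kind of global ranking into a per-class analysis.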