Abstract

Machine learning models are widely used in computational ecology. They can be applied to species distribution modeling, which aims to estimate the probability of occurrence of a species given the environmental conditions. However, ecologists often regard these models as "black boxes", since basic machine learning knowledge is required to interpret them. Thus, in this work four Explainable Artificial Intelligence techniques - Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), BreakDown, and Partial Dependence Plots - were evaluated on a Random Forest classifier for Coragyps atratus in the Amazon Basin region. It was found that SHapley Additive exPlanations and Partial Dependence Plots are able to improve the explainability of the model.
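To illustrate one of the techniques named above, the sketch below computes a partial dependence curve for a Random Forest classifier using scikit-learn. It is not the paper's code or data: the dataset is synthetic (a hypothetical stand-in for the environmental predictors used for Coragyps atratus), and the feature index chosen is arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

# Synthetic stand-in for environmental predictors; the study's real
# inputs (Amazon Basin environmental layers) are not reproduced here.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Partial dependence of the predicted occurrence probability on feature 0:
# the model's response is averaged over the data while that feature is
# swept across a grid of values.
pd_result = partial_dependence(model, X, features=[0], kind="average")

# One averaged curve; for a classifier the values are probabilities.
curve = pd_result["average"][0]
print(curve.min(), curve.max())
```

Plotting `curve` against the grid values (via `sklearn.inspection.PartialDependenceDisplay`) yields the Partial Dependence Plot the abstract refers to: a visual summary of how one environmental variable, in isolation, shifts the predicted probability of occurrence.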