Abstract

While Machine Learning (ML) algorithms have achieved impressive prediction accuracy in various fields, their ability to explain their outputs remains an open issue. The explainability research field is devoted to investigating techniques that can interpret the predictions of ML algorithms. Among the various approaches to explainability, we focus on GLEAMS: a decision tree-based solution that has proven promising from several perspectives, but suffers a significant increase in execution time as the problem size grows. In this work, we analyse state-of-the-art parallel approaches to decision tree-building algorithms and adapt them to the peculiar characteristics of GLEAMS. Relying on Ray, an increasingly popular distributed computing engine, we propose and implement different parallelization strategies for GLEAMS. An extensive evaluation highlights the benefits and limitations of each strategy and compares their performance with existing explainability algorithms.
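To give a flavour of the kind of task-level parallelism involved, the sketch below evaluates candidate splits for each feature of a decision-tree node concurrently. It is an illustrative assumption only: the function names, the Gini criterion, and the use of the standard library's `concurrent.futures` (as a single-machine stand-in for Ray's distributed task model) are not taken from GLEAMS's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor


def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())


def best_split_for_feature(feature_idx, column, labels):
    """Scan every threshold of one feature; return (score, feature, threshold)."""
    best = (float("inf"), feature_idx, None)
    n = len(labels)
    for t in sorted(set(column)):
        left = [y for x, y in zip(column, labels) if x <= t]
        right = [y for x, y in zip(column, labels) if x > t]
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        if score < best[0]:
            best = (score, feature_idx, t)
    return best


def parallel_best_split(X, y, max_workers=4):
    """Evaluate all features concurrently and keep the lowest-impurity split."""
    columns = list(zip(*X))  # column-major view of the feature matrix
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(best_split_for_feature,
                           range(len(columns)), columns,
                           [y] * len(columns))
    return min(results)


# Toy data: the second feature separates the classes perfectly.
X = [(3.0, 0.0), (1.0, 0.0), (2.0, 1.0), (4.0, 1.0)]
y = [0, 0, 1, 1]
score, feat, thr = parallel_best_split(X, y)  # → (0.0, 1, 0.0)
```

In a Ray-based version, each per-feature (or per-node) search would instead be submitted as a remote task so the work can spread across processes and machines rather than threads.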

