Abstract

Ensemble learning is a type of machine learning, typically supervised learning, that combines the decisions of multiple individual models to improve classification or regression accuracy. Since their introduction two decades ago, ensemble models have been widely used not only in academia but also in practical applications, particularly in data science competitions such as those hosted on Kaggle, because they excel on tabular and structured data. However, understanding the decision mechanisms of such large models is a challenge. In the explainable artificial intelligence (XAI) literature, several studies have attempted to make ensemble models more transparent. Although deep learning has recently attracted significant attention in XAI research, it remains important to develop methods that help explain ensemble models. This chapter introduces the problem of explaining ensemble models in detail. Two of the most popular ensemble approaches, bagging and boosting, are first introduced, and then the main factors that make ensemble models difficult to explain are discussed. Finally, a taxonomy of ensemble interpretation methods and some representative techniques for each category are presented.
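To make the two approaches named above concrete, the following is a minimal sketch, assuming scikit-learn (not part of the chapter itself): bagging trains independent base learners on bootstrap samples and aggregates their votes, while boosting trains base learners sequentially, with each new learner focusing on the errors of its predecessors.

```python
# Minimal illustration of bagging vs. boosting on synthetic tabular data.
# Assumes scikit-learn; dataset and hyperparameters are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: fit many base learners (decision trees by default) in parallel
# on bootstrap resamples of the training set, then combine by majority vote.
bagging = BaggingClassifier(n_estimators=50, random_state=0)
bagging.fit(X_train, y_train)

# Boosting: fit base learners sequentially, each one correcting the
# residual errors of the ensemble built so far.
boosting = GradientBoostingClassifier(n_estimators=50, random_state=0)
boosting.fit(X_train, y_train)

print("bagging accuracy:", bagging.score(X_test, y_test))
print("boosting accuracy:", boosting.score(X_test, y_test))
```

Each ensemble here aggregates dozens of trees, which is precisely why tracing an individual prediction back through the model is harder than for a single tree.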
