Abstract

Ensemble methods have been shown to be more effective than monolithic classifiers, in particular when their components are diverse. How to enforce diversity in classifier ensembles has received much attention from machine learning researchers, yielding a variety of techniques and algorithms. In this paper, a novel algorithm for ensemble classifiers is proposed in which ensemble components are trained with a focus on different regions of the sample space. Diversity thus arises mainly as a consequence of deliberately limiting the scope of each base classifier. The proposed algorithm shares roots with several ensemble paradigms, in particular with random forests, as it likewise generates forests of decision trees. Since each decision tree is trained with a focus on a specific subset of the sample space, the resulting ensemble is in fact a forest of “local” trees. Comparative experimental results show that, on average, these ensembles outperform other relevant kinds of ensemble classifiers, including random forests.
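
The abstract does not specify how the regions are chosen or how the local trees' votes are combined, so the sketch below is only a minimal illustration of the general idea, not the authors' algorithm. It assumes regions defined by k-means clustering, one scikit-learn decision tree fitted per region, and prediction by distance-weighted voting; the class name LocalTreeForest and all parameters are hypothetical.

# Illustrative sketch only (assumptions noted above), not the paper's method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

class LocalTreeForest:
    """Forest of 'local' trees: each tree is trained on one region of the
    sample space (here, one k-means cluster) and its vote is weighted by how
    close the query point is to that region's centroid."""

    def __init__(self, n_regions=10, random_state=None):
        self.n_regions = n_regions
        self.random_state = random_state

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.classes_ = np.unique(y)
        # Partition the training samples into regions (assumed: k-means).
        self.kmeans_ = KMeans(n_clusters=self.n_regions,
                              random_state=self.random_state).fit(X)
        labels = self.kmeans_.labels_
        # One "local" decision tree per region.
        self.trees_ = []
        for r in range(self.n_regions):
            mask = labels == r
            tree = DecisionTreeClassifier(random_state=self.random_state)
            tree.fit(X[mask], y[mask])
            self.trees_.append(tree)
        return self

    def predict(self, X):
        X = np.asarray(X)
        # Distance of each query point to each region centroid.
        dists = self.kmeans_.transform(X)        # shape (n_samples, n_regions)
        weights = 1.0 / (dists + 1e-12)          # closer region -> larger weight
        votes = np.zeros((X.shape[0], len(self.classes_)))
        for r, tree in enumerate(self.trees_):
            proba = np.zeros((X.shape[0], len(self.classes_)))
            cols = np.searchsorted(self.classes_, tree.classes_)
            proba[:, cols] = tree.predict_proba(X)
            votes += weights[:, [r]] * proba
        return self.classes_[votes.argmax(axis=1)]

In this sketch diversity stems from each tree seeing a different part of the sample space, rather than from bootstrap resampling and random feature selection as in random forests, which mirrors the intuition stated in the abstract.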
