Abstract

Over the last two decades, the machine learning and related communities have conducted numerous studies on improving upon the performance of a single classifier by combining several classifiers generated from one or more learning algorithms. Bagging and Boosting are the most representative algorithms for generating homogeneous ensembles of classifiers. Stacking, however, has become a commonly used technique for generating ensembles of heterogeneous classifiers since Wolpert presented his study entitled Stacked Generalization in 1992. Studies of Stacking have shown that the selection of the base learning algorithms that generate the ensemble members, the learning parameters of those algorithms, and the learning algorithm used to build the meta-classifier are all critical design issues. Most work on this topic selects the combination of base learning algorithms and their learning parameters manually; other methods, however, determine good Stacking configurations automatically rather than starting from such strong initial assumptions. In this paper, we describe Stacking and its variants and present several examples of application domains.

WIREs Data Mining Knowl Discov 2015, 5:21–34. doi: 10.1002/widm.1143

This article is categorized under: Technologies > Classification
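To make the scheme concrete, the sketch below shows one common way to realize Stacking in practice; it is an illustration under our own assumptions, not the configuration studied in the paper. Heterogeneous base (level-0) classifiers are combined by a meta (level-1) classifier trained on their cross-validated predictions, here using scikit-learn's StackingClassifier. The choice of dataset (iris), base learners, and meta-learner is arbitrary and only for demonstration.

```python
# Minimal Stacking sketch (illustrative assumptions, not the paper's setup):
# heterogeneous base classifiers whose predictions feed a meta-classifier.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Heterogeneous base (level-0) classifiers; the choice of algorithms and
# their learning parameters is exactly the design issue the paper surveys.
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

# The meta (level-1) classifier is trained on cross-validated predictions
# of the base classifiers, following Wolpert's Stacked Generalization idea.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print("Stacking accuracy: %.3f" % stack.score(X_test, y_test))
```

Swapping the base learners, their parameters, or the final_estimator changes the Stacking configuration; automatic approaches mentioned above search over these choices instead of fixing them by hand.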
