Abstract
Ensembles of predictors have generally been found to outperform single predictors. Although diversity is widely thought to be an important factor in building successful ensembles, the literature contains contradictory results on its influence on generalisation error. Fundamental to this may be how diversity itself is defined. We present two new diversity measures, based on the idea of ambiguity, obtained from the bias-variance decomposition using the cross-entropy error or the hinge loss. When random sampling is used to select the patterns on which ensemble members are trained, we find that generalisation error is negatively correlated with diversity at high sampling rates; conversely, generalisation error is positively correlated with diversity when the sampling rate is low and the diversity is high. For small ensembles, we use evolutionary optimisers to select the subsets of patterns for predictor training by maximising these diversity measures on training data. Evaluation of generalisation performance on a range of classification datasets from the literature shows that ensembles obtained by maximising the cross-entropy diversity measure generalise well, enhancing the performance of small ensembles. For large ensembles, we define tree-selection methods that favour ambiguous ensembles over unambiguous ones. Our results show that the approach preferring ambiguous ensembles reduces generalisation error the most and considerably reduces the number of trees needed to achieve good generalisation performance.
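To illustrate the ambiguity idea underlying these measures, the following is a minimal sketch, assuming ambiguity is measured as the average member cross-entropy minus the cross-entropy of the arithmetic-mean ensemble prediction; the paper's own decomposition may use a different combiner, and all names here are illustrative rather than the authors' implementation.

```python
# Minimal sketch of an ambiguity-style diversity measure for classifier
# ensembles (assumption: arithmetic-mean combiner and cross-entropy loss;
# the paper's exact decomposition may differ).
import numpy as np

def cross_entropy(y_onehot, probs, eps=1e-12):
    """Mean cross-entropy of predicted class probabilities vs. one-hot targets."""
    return -np.mean(np.sum(y_onehot * np.log(probs + eps), axis=1))

def ambiguity_diversity(y_onehot, member_probs):
    """member_probs: array of shape (n_members, n_samples, n_classes).

    Returns (ensemble_ce, mean_member_ce, ambiguity). With an
    arithmetic-mean combiner, Jensen's inequality gives
    ambiguity = mean_member_ce - ensemble_ce >= 0: the ensemble is
    at least as good as the average member, and the gap grows with
    disagreement among members.
    """
    ensemble_probs = member_probs.mean(axis=0)  # arithmetic-mean combiner
    ensemble_ce = cross_entropy(y_onehot, ensemble_probs)
    mean_member_ce = np.mean([cross_entropy(y_onehot, p) for p in member_probs])
    return ensemble_ce, mean_member_ce, mean_member_ce - ensemble_ce

# Toy usage: 3 members, 4 samples, 2 classes.
rng = np.random.default_rng(0)
y = np.eye(2)[rng.integers(0, 2, size=4)]
probs = rng.dirichlet(np.ones(2), size=(3, 4))
print(ambiguity_diversity(y, probs))
```

A selection procedure of the kind described in the abstract could then rank candidate pattern subsets or tree subsets by this ambiguity term computed on training data.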