Abstract

Online learning extracts information from large data streams whose underlying distribution may change over time, an event known as concept drift. Concept drift detectors are small programs that attempt to detect these changes so that the base classifier can be replaced, improving overall accuracy. Ensembles of classifiers are also common in this area, and some of them can be configured with a drift detector. This article summarizes a large-scale comparison of six ensemble algorithms, each configured with 10 different drift detectors, for learning from fully labeled data streams, using a large number of artificial datasets and two popular base learners in the area: Naive Bayes and Hoeffding Tree. In addition, the code of one of the ensembles (Leveraging Bagging) was modified so that it can be configured with any drift detector; its original implementation uses only ADWIN. The goal is to assess how good the existing detector-configurable ensemble algorithms really are, and to verify and challenge a common belief in the area. The experimental results suggest that, in most datasets, the choice of ensemble algorithm has much more impact on the final accuracy than the choice of drift detector used in its configuration. They also suggest that the best auxiliary detectors for configuring the ensembles, i.e., those that maximize the accuracy of the ensembles, differ only marginally from the detectors with the best standalone accuracies on the same datasets (recently reported in another article).
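To make the detect-and-replace loop described above concrete, the sketch below pairs a fixed base classifier with a simplified DDM-style drift detector (DDM is one of the classic detectors in this literature, though the paper compares 10 of them). All names, the noiseless synthetic stream, and the abrupt concept flip at step 500 are illustrative assumptions, not the paper's experimental setup; real evaluations use ensembles and frameworks such as MOA.

```python
import math

class SimpleDDM:
    """Simplified DDM-style detector (illustrative, not the full algorithm).

    Tracks the running error rate p and its standard deviation s, remembers
    the best (lowest) p + s seen so far, and signals drift when the current
    p + s exceeds p_min + 3 * s_min.
    """

    def __init__(self, min_samples=30):
        self.min_samples = min_samples
        self.reset()

    def reset(self):
        self.n = 0
        self.p = 0.0          # running error rate
        self.s = 0.0          # std. dev. of the error rate
        self.p_min = float("inf")
        self.s_min = float("inf")
        self.drift = False

    def update(self, error):
        """Feed one prediction outcome (1 = misclassified, 0 = correct)."""
        self.n += 1
        self.p += (error - self.p) / self.n          # incremental mean
        self.s = math.sqrt(self.p * (1 - self.p) / self.n)
        self.drift = False
        if self.n < self.min_samples:
            return
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s
        if self.p + self.s > self.p_min + 3.0 * self.s_min:
            self.drift = True


def run_stream(length=1000, drift_point=500):
    """Noiseless stream whose concept flips at drift_point.

    The 'base classifier' is a fixed threshold rule; on detected drift it is
    'replaced' by flipping the rule, and the detector is reset.
    """
    detector = SimpleDDM()
    model_positive = True   # current model: predict 1 iff x > 0.5
    detections = []
    for t in range(length):
        x = (t % 97) / 97.0                      # deterministic inputs
        true_positive = t < drift_point          # concept flips at drift_point
        y = int(x > 0.5) if true_positive else int(x <= 0.5)
        pred = int(x > 0.5) if model_positive else int(x <= 0.5)
        detector.update(int(pred != y))
        if detector.drift:
            detections.append(t)
            model_positive = not model_positive  # replace the base classifier
            detector.reset()
    return detections

print(run_stream())
```

On this noiseless stream the detector fires as soon as the first post-drift errors arrive; with label noise, real detectors trade off detection delay against false alarms, which is part of what the comparison in the paper measures.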
