Abstract

In nonstationary environments, data distributions can change over time. This phenomenon is known as concept drift, and models must adapt if they are to remain accurate. With gradient boosting (GB) ensemble models, selecting which weak learners to keep or prune to maintain accuracy under concept drift is a nontrivial research problem. Unlike existing models such as AdaBoost, which can directly compare weak learners by their accuracy (a metric in [0, 1]), in GB, weak learners' performance is measured on different scales. To address this scaling issue, we propose a novel criterion for evaluating weak learners in GB models, called the loss improvement ratio (LIR). Based on LIR, we develop two pruning strategies: 1) naive pruning (NP), which simply deletes all learners with increasing loss, and 2) statistical pruning (SP), which removes learners only if their loss increase meets a significance threshold. We also devise a scheme that dynamically switches between NP and SP to achieve the best performance. We implement the scheme as a concept drift learning algorithm, called evolving gradient boost (LIR-eGB). On average, LIR-eGB outperformed state-of-the-art methods on both stationary and nonstationary data.
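To make the naive pruning (NP) idea concrete, the following is a minimal sketch. The abstract does not give the exact formula for the loss improvement ratio, so this sketch assumes a plausible form: LIR of learner i is the relative loss reduction obtained when learner i is added, and NP keeps only the learners whose LIR is positive (i.e., that reduced the ensemble loss). The function names `loss_improvement_ratio` and `naive_prune` are illustrative, not from the paper.

```python
def loss_improvement_ratio(loss_before: float, loss_after: float) -> float:
    """Assumed LIR: relative loss reduction attributed to one weak learner.

    Positive when the learner reduced the ensemble loss, negative when the
    loss increased after the learner was added.
    """
    return (loss_before - loss_after) / loss_before


def naive_prune(stage_losses: list[float]) -> list[int]:
    """Sketch of naive pruning (NP) over a sequence of per-stage losses.

    stage_losses[0] is the loss of the initial model; stage_losses[i] is the
    ensemble loss after adding weak learner i. Returns the indices of the
    learners to keep (those with positive LIR); learners whose loss
    increased are deleted, per the NP strategy.
    """
    keep = []
    for i in range(1, len(stage_losses)):
        lir = loss_improvement_ratio(stage_losses[i - 1], stage_losses[i])
        if lir > 0:  # learner i reduced the loss -> keep it
            keep.append(i)
    return keep
```

Statistical pruning (SP) would replace the simple `lir > 0` check with a significance test on the loss increase, trading responsiveness to drift for robustness to noise.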
