Abstract

Concept drift is one of the key challenges that incremental learning must handle. Many algorithms have been proposed to cope with it, but responding quickly to a change of concept remains difficult. In this paper, a novel method named Selective Transfer Incremental Learning (STIL) is proposed to address this issue. STIL applies a selective transfer strategy on top of the well-known chunk-based ensemble framework. In this way, STIL can adapt well to the new concept of the data through transfer learning, while an appropriate selection policy effectively prevents the negative transfer and overfitting that may occur during transfer. The algorithm was evaluated on 15 synthetic datasets and three real-world datasets; the experimental results show that STIL outperforms five other state-of-the-art methods on almost all of the datasets.
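The abstract describes a chunk-based ensemble in which knowledge from earlier chunks is transferred selectively, so that models fitted to an outdated concept do not drag down performance after drift. The following is only a rough, hedged sketch of that general idea, not the authors' actual STIL algorithm: each incoming chunk trains a new base learner (a decision stump here, for simplicity), and previous ensemble members are retained only if they still beat a random-guess baseline on a held-out slice of the new chunk, which is one simple way to guard against negative transfer. All class names and parameters below are hypothetical.

```python
import numpy as np

class Stump:
    """Decision stump: best single-feature threshold rule (binary labels 0/1)."""
    def fit(self, X, y):
        best = (-1.0, 0, 0.0, 0, 1)  # (accuracy, feature, threshold, lo_label, hi_label)
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for lo, hi in ((0, 1), (1, 0)):
                    acc = (np.where(X[:, j] > t, hi, lo) == y).mean()
                    if acc > best[0]:
                        best = (acc, j, t, lo, hi)
        _, self.j, self.t, self.lo, self.hi = best
        return self

    def predict(self, X):
        return np.where(X[:, self.j] > self.t, self.hi, self.lo)

    def score(self, X, y):
        return (self.predict(X) == y).mean()

class SelectiveTransferEnsemble:
    """Hypothetical chunk-based ensemble with a selective-transfer gate.

    On each chunk, old members are kept ("transferred") only if they still
    perform above chance on held-out data from the new chunk; members that
    fail the gate are dropped to avoid negative transfer.
    """
    def __init__(self, max_members=10):
        self.max_members = max_members
        self.members = []

    def partial_fit(self, X, y):
        split = int(0.75 * len(X))  # hold out the tail of the chunk for the gate
        X_tr, y_tr, X_val, y_val = X[:split], y[:split], X[split:], y[split:]
        # Selective transfer: retain only old members still useful on the new concept.
        self.members = [m for m in self.members if m.score(X_val, y_val) > 0.5]
        self.members.append(Stump().fit(X_tr, y_tr))
        self.members = self.members[-self.max_members:]  # bound ensemble size

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.members])
        return (votes.mean(axis=0) > 0.5).astype(int)  # majority vote
```

A quick usage pattern: feed chunks in arrival order via `partial_fit`, then call `predict` on the latest data; after an abrupt drift, members fitted to the old concept score near chance on the held-out slice and are pruned, so the ensemble recovers within a chunk or two.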
