Abstract

Concept drift poses significant challenges in machine learning and data mining. Many existing algorithms either struggle to maintain low error rates or require excessive computational resources to achieve satisfactory classification results when addressing concept drift. To address these issues, we first introduce a novel mathematical model that mitigates the degradation of classification performance caused by concept drift. Based on the proposed objective function, a continuous kernel learning method is then employed to adapt to potential changes in the data distribution as new samples arrive. Furthermore, we propose an ensemble learning approach that leverages a majority voting strategy to enhance classification performance in non-stationary environments. Finally, we provide a theoretical analysis of the proposed algorithm. Experimental results demonstrate that, on different types of data streams, the proposed algorithm not only achieves lower error rates and reduced memory consumption but also runs more efficiently than most state-of-the-art algorithms.
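The majority-voting ensemble described above can be illustrated with a minimal sketch. This is not the paper's actual method; the class names and the trivial stand-in base learner are hypothetical, and a real implementation would use the proposed continuous kernel learners as ensemble members, updated as the stream arrives.

```python
from collections import Counter

class MajorityVoteEnsemble:
    """Minimal online ensemble: each member predicts a label, and the
    ensemble outputs the label chosen by the most members."""

    def __init__(self, learners):
        # Any objects exposing predict(x) and partial_fit(x, y) will do.
        self.learners = learners

    def predict(self, x):
        votes = Counter(learner.predict(x) for learner in self.learners)
        return votes.most_common(1)[0][0]

    def partial_fit(self, x, y):
        # Stream setting: update every member as each new sample arrives.
        for learner in self.learners:
            learner.partial_fit(x, y)

class ConstantLearner:
    """Toy stand-in for a kernel-based base classifier (hypothetical)."""
    def __init__(self, label):
        self.label = label
    def predict(self, x):
        return self.label
    def partial_fit(self, x, y):
        pass

ensemble = MajorityVoteEnsemble(
    [ConstantLearner(1), ConstantLearner(1), ConstantLearner(0)]
)
print(ensemble.predict([0.5]))  # majority of {1, 1, 0} -> 1
```

In a non-stationary stream, members that adapt poorly after a drift are simply outvoted by better-adapted ones, which is what makes majority voting a natural robustness mechanism here.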
