Abstract
Concept drift poses significant challenges in machine learning and data mining. Many existing algorithms either struggle to maintain low error rates or require excessive computational resources to achieve satisfactory classification results under concept drift. To address these issues, a novel mathematical model is first introduced to mitigate the degradation of classification performance caused by concept drift. Then, based on the proposed objective function, a continuous kernel learning method is employed to adapt to potential changes in data distribution as new samples continuously arrive. Furthermore, we propose an ensemble learning approach that leverages a majority voting strategy to enhance classification performance in non-stationary environments. Finally, a theoretical analysis of the proposed algorithm is conducted. Experimental results demonstrate that the proposed algorithm not only achieves lower error rates and reduced memory consumption but also operates more efficiently than most state-of-the-art algorithms when processing different types of data streams.
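To make the ensemble idea concrete, the following is a minimal sketch of a majority-voting ensemble over online base learners, as the abstract describes. The base learner here (a sliding-window majority-class predictor) and all names are illustrative assumptions, not the paper's continuous kernel learner; forgetting old samples via the window is one simple way such a learner can track a drifting distribution.

```python
from collections import Counter, deque

class WindowedLearner:
    """Toy base learner (assumption, not the paper's kernel method):
    predicts the majority label within a sliding window, so older
    samples are forgotten and the learner can follow drift."""
    def __init__(self, window=50):
        self.labels = deque(maxlen=window)  # only the most recent labels

    def predict(self, x):
        if not self.labels:
            return 0  # arbitrary default before any data arrives
        return Counter(self.labels).most_common(1)[0][0]

    def update(self, x, y):
        self.labels.append(y)  # old labels fall out of the window

class MajorityVoteEnsemble:
    """Combine base learners by majority vote over their predictions."""
    def __init__(self, learners):
        self.learners = learners

    def predict(self, x):
        votes = Counter(l.predict(x) for l in self.learners)
        return votes.most_common(1)[0][0]

    def update(self, x, y):
        # Each learner sees every labeled sample as the stream arrives.
        for l in self.learners:
            l.update(x, y)
```

Using learners with different window sizes makes the ensemble react to drift at several time scales: short windows adapt quickly after a distribution change, while long windows remain stable under noise, and the vote balances the two.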