Abstract

In data stream mining, concept drift may cause the predictions of machine learning models to become less accurate over time. Existing concept drift detection and adaptation methods follow a framework that buffers new samples when a drift-warning level is triggered and retrains a new model when a drift-alarm level is triggered. However, these methods neglect the fact that the performance of a learning model can be more sensitive to the amount of training data than to the concept drift itself; in other words, a retrained model built on very few data instances can be even worse than the old model trained before the drift. To elaborate on and address this problem, we propose a fast switch Naive Bayes model (fsNB) for concept drift detection and adaptation. The intuition is to apply the follow-the-leader idea from online learning: we maintain a sliding-window Naive Bayes classifier and an incremental one, and if the sliding classifier outperforms the incremental one, the model reports a drift. The experimental evaluation shows the advantages of fsNB and demonstrates that retraining may not be the best option for a marginal drift.
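The core mechanism described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: class names, window sizes, the accuracy margin, and the use of Gaussian Naive Bayes are all assumptions for the sake of the example. It maintains two classifiers in a test-then-train loop, reports a drift when the sliding-window model's recent accuracy overtakes the incremental model's, and then "switches" by rebuilding the incremental model from the window.

```python
# Hypothetical sketch of the fast-switch idea described in the abstract.
# All parameter names and defaults here are illustrative assumptions.
from collections import deque
import numpy as np
from sklearn.naive_bayes import GaussianNB


class FastSwitchNB:
    def __init__(self, classes, window=200, eval_window=50, margin=0.05):
        self.classes = classes
        self.window = deque(maxlen=window)          # recent (x, y) pairs
        self.inc = GaussianNB()                     # incrementally updated model
        self.slide = GaussianNB()                   # refit on the sliding window
        self.inc_hits = deque(maxlen=eval_window)   # recent correctness flags
        self.slide_hits = deque(maxlen=eval_window)
        self.margin = margin                        # required accuracy gap
        self.fitted = False

    def update(self, x, y):
        """One test-then-train step; returns True if a drift is reported."""
        x = np.asarray(x, dtype=float).reshape(1, -1)
        drift = False
        if self.fitted:
            # Test both models on the incoming sample before training.
            self.inc_hits.append(int(self.inc.predict(x)[0] == y))
            self.slide_hits.append(int(self.slide.predict(x)[0] == y))
            full = len(self.inc_hits) == self.inc_hits.maxlen
            if full and np.mean(self.slide_hits) > np.mean(self.inc_hits) + self.margin:
                drift = True
        # Train: grow the incremental model, refit the sliding one.
        self.window.append((x[0], y))
        self.inc.partial_fit(x, [y], classes=self.classes)
        Xw = np.array([xi for xi, _ in self.window])
        yw = np.array([yi for _, yi in self.window])
        if len(set(yw)) > 1:
            self.slide = GaussianNB().fit(Xw, yw)
            self.fitted = True
        if drift:
            # "Fast switch": adopt the window data as the incremental
            # model's new training set and restart the comparison.
            self.inc = GaussianNB().partial_fit(Xw, yw, classes=self.classes)
            self.inc_hits.clear()
            self.slide_hits.clear()
        return drift
```

Note that the sliding model is refit on the whole window at every step purely for clarity; a practical implementation would update it incrementally by also removing the statistics of the sample leaving the window.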
