Abstract
Learning from data streams (incremental learning) is attracting increasing research attention, owing to the many real-world streaming problems and open challenges it presents, among which is the detection of concept drift, a phenomenon in which the data distribution changes and renders the current prediction model inaccurate or obsolete. Current state-of-the-art detection methods can be roughly split into performance-monitoring algorithms and distribution-comparison algorithms. In this work we propose a novel concept drift detector that can be combined with an arbitrary classification algorithm. The proposed detector computes multiple model explanations over time and observes the magnitudes of their changes. The model explanation is computed using a methodology that yields attribute-value contributions to prediction outcomes, thereby providing insight into the model's decision-making process and making it transparent. The evaluation revealed that the proposed methods surpass the baseline methods in terms of concept drift detection, accuracy, robustness, and sensitivity. To further aid interpretability, we visualize the detection of concept drift, enabling both macro and micro views of the data.
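To make the mechanism concrete, the following is a minimal sketch of explanation-based drift detection under stated assumptions: a fitted classifier with a `predict` method and a stream processed in batches. The permutation-based contribution below is a simplified stand-in for the attribute-contribution methodology the abstract refers to, not the authors' implementation, and the names `explanation_vector`, `detect_drift`, and the `threshold` parameter are hypothetical, introduced here only for illustration.

```python
import numpy as np

def explanation_vector(model, X, rng=None):
    """Approximate per-attribute contributions: for each attribute,
    measure the fraction of predictions that change when that attribute
    is scrambled (a simple stand-in for attribute-value contributions)."""
    rng = rng or np.random.default_rng(0)
    base = model.predict(X)
    contrib = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        # replace attribute j with values drawn from its own marginal
        Xp[:, j] = rng.permutation(X[:, j])
        contrib[j] = np.mean(model.predict(Xp) != base)
    return contrib

def detect_drift(stream_batches, model, threshold=0.2):
    """Yield the index of every batch whose normalized explanation
    vector diverges from the previous one by more than `threshold`."""
    prev = None
    for t, X in enumerate(stream_batches):
        expl = explanation_vector(model, X)
        expl = expl / (np.linalg.norm(expl) + 1e-12)
        if prev is not None and np.linalg.norm(expl - prev) > threshold:
            yield t  # explanation shifted: signal possible concept drift
        prev = expl
```

In this sketch, drift is signalled whenever the magnitude of change between consecutive explanation vectors exceeds a threshold, mirroring the abstract's idea of tracking how explanations evolve over time rather than monitoring predictive error directly.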