Abstract

Distribution drift is a key challenge in practical applications of machine learning (ML). In streaming ML in particular, the data distribution may change over time, giving rise to concept drift, which degrades the performance of learners trained on outdated data. In this article, we focus on supervised problems in an online nonstationary setting, introducing a novel learner-agnostic algorithm for drift adaptation whose goal is to retrain the learner efficiently when drift is detected. The algorithm incrementally estimates the joint probability density of input and target for the incoming data and, as soon as drift is detected, retrains the learner using importance-weighted empirical risk minimization. The importance weights are computed from the estimated densities for all the samples observed so far, thereby using all available information efficiently. After presenting our approach, we provide a theoretical analysis in the abrupt-drift setting. Finally, we present numerical simulations illustrating how our algorithm competes with, and often outperforms, state-of-the-art stream learning techniques, including adaptive ensemble methods, on both synthetic and real-world data benchmarks.
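As a rough illustration of the retraining step described above, the sketch below weights each pre-drift sample by the ratio of the post-drift to pre-drift estimated joint densities and refits a weighted learner. This is our own minimal example, not the paper's implementation: the Gaussian KDE density estimators, the SGDRegressor learner, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

# Synthetic abrupt drift: the input-target relation flips sign at the drift point.
X_old = rng.normal(size=500)
y_old = X_old + 0.1 * rng.normal(size=500)
X_new = rng.normal(size=100)
y_new = -X_new + 0.1 * rng.normal(size=100)

# Estimate the joint density p(x, y) before and after the detected drift
# (the paper estimates these incrementally; a batch KDE stands in here).
kde_old = gaussian_kde(np.vstack([X_old, y_old]))
kde_new = gaussian_kde(np.vstack([X_new, y_new]))

# Importance weights for pre-drift samples: w_i = p_new(x_i, y_i) / p_old(x_i, y_i).
# Old samples still consistent with the new concept keep a high weight;
# post-drift samples already come from the new distribution, so their weight is 1.
pts_old = np.vstack([X_old, y_old])
w_old = kde_new(pts_old) / np.maximum(kde_old(pts_old), 1e-12)
w_new = np.ones(len(X_new))

# Importance-weighted empirical risk minimization over all samples seen so far:
# any learner that accepts per-sample weights can be retrained this way.
X_all = np.concatenate([X_old, X_new]).reshape(-1, 1)
y_all = np.concatenate([y_old, y_new])
learner = SGDRegressor(max_iter=1000)
learner.fit(X_all, y_all, sample_weight=np.concatenate([w_old, w_new]))
```

Weighting by the density ratio makes the weighted empirical risk on the stored samples an estimate of the risk under the post-drift distribution, which is why old-but-still-consistent samples can be reused rather than discarded.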
