Abstract

In online continual learning, models must adapt to an ever-changing environment. One of the most persistent obstacles to this adaptation is catastrophic forgetting (CF), which occurs when models trained on non-identically distributed data lose performance on previously learned tasks. Rehearsal methods aim to address this challenge by maintaining a buffer of past training samples and replaying them during training. However, the absence of known task boundaries complicates the adaptation of current CF mitigation methods. This paper proposes a method attuned to the characteristics of the data stream and to online model performance in a resource-constrained environment. The number of training iterations and the learning rate emerge as crucial hyperparameters, affecting both the efficacy and the efficiency of online continual learning. To this end, we propose a combination of Experience Replay methodologies, a Drift Detector, and several training convergence policies, tailored specifically to scenarios with unknown task boundaries. Experimental results demonstrate the effectiveness of our approach, which maintains or improves performance relative to baseline methods while significantly reducing computational cost.
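The abstract only names the components; the paper's actual design is not shown here. As a rough illustration of how such a pipeline could fit together, the following Python sketch combines a rehearsal buffer, a loss-based drift detector, and a policy that spends more replay iterations (with a larger step size) right after a detected drift. The reservoir buffer, the threshold-based detector, and the specific iteration/learning-rate values are illustrative assumptions, not the authors' method.

```python
import numpy as np

class ReservoirBuffer:
    """Fixed-size rehearsal buffer filled by reservoir sampling
    (an assumed buffer policy, not necessarily the paper's)."""
    def __init__(self, capacity, rng):
        self.capacity, self.rng = capacity, rng
        self.x, self.y, self.seen = [], [], 0

    def add(self, x, y):
        self.seen += 1
        if len(self.x) < self.capacity:
            self.x.append(x); self.y.append(y)
        else:
            j = self.rng.integers(self.seen)
            if j < self.capacity:
                self.x[j], self.y[j] = x, y

    def sample(self, n):
        idx = self.rng.integers(len(self.x), size=min(n, len(self.x)))
        return np.stack([self.x[i] for i in idx]), np.array([self.y[i] for i in idx])

class LossDriftDetector:
    """Flags drift when the short-term mean loss exceeds the long-term
    mean by k standard deviations; a simple stand-in for principled
    detectors such as ADWIN."""
    def __init__(self, window=50, k=2.0):
        self.window, self.k, self.losses = window, k, []

    def update(self, loss):
        self.losses.append(float(loss))
        if len(self.losses) < 2 * self.window:
            return False
        old = np.array(self.losses[:-self.window])
        new = np.array(self.losses[-self.window:])
        drift = new.mean() > old.mean() + self.k * old.std()
        if drift:
            self.losses = self.losses[-self.window:]  # restart statistics
        return drift

def online_step(w, x, y, buffer, detector, lr=0.1):
    """One online update of a logistic-regression model with replay."""
    # Loss of the incoming example drives the drift signal.
    p = 1.0 / (1.0 + np.exp(-x @ w))
    loss = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    drifted = detector.update(loss)
    # Convergence policy (illustrative values): train harder after drift,
    # save compute when the stream is stable.
    iters, step = (10, 2 * lr) if drifted else (2, lr)
    buffer.add(x, y)
    for _ in range(iters):
        xb, yb = buffer.sample(8)
        pb = 1.0 / (1.0 + np.exp(-xb @ w))
        w -= step * xb.T @ (pb - yb) / len(yb)
    return w, drifted
```

A usage example under the same assumptions, with a synthetic label flip standing in for an unannounced task boundary:

```python
rng = np.random.default_rng(0)
w = np.zeros(5)
buf = ReservoirBuffer(capacity=200, rng=rng)
det = LossDriftDetector()
for t in range(1000):
    x = rng.normal(size=5)
    y = float((x @ np.ones(5) > 0) ^ (t > 500))  # concept drift at t = 500
    w, drifted = online_step(w, x, y, buf, det)
```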
