Abstract

Continual Learning (CL) studies the problem of developing a robust model that can learn new tasks while retaining previously learned knowledge. However, most current CL methods focus exclusively on annotated data, overlooking the fact that unlabelled data dominate real-world applications. To close this research gap, this study concentrates on continual self-supervised learning, which is plagued by the challenges of memory over-fitting and class imbalance. Moreover, both challenges are exacerbated as incremental training proceeds. To address these challenges from both the loss and data perspectives, we introduce Adaptive Self-supervised Continual Learning (ASCL), a framework comprising two modules. Specifically, we devise an Adaptive Sharpness-Aware Minimization (ASAM) module that identifies flatter local minima in the loss landscape, which carry a lower risk of memory over-fitting. Additionally, we design an Adaptive Memory Enhancement (AME) module that rebalances the self-supervised loss between new and old tasks from a data perspective. Finally, the adaptive mechanisms in the ASAM and AME modules dynamically adjust the sharpness of the loss landscape and the strength of memory enhancement based on feedback from intermediate training results. Extensive experiments demonstrate the state-of-the-art performance of our method in continual self-supervised learning scenarios across multiple datasets.
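
For readers unfamiliar with sharpness-aware minimization, the sketch below illustrates a generic SAM update step in PyTorch. The names `sam_step`, `ssl_loss`, `base_optimizer`, and the radius `rho` are illustrative placeholders, not identifiers from the paper; the paper's ASAM module additionally adapts the sharpness control from intermediate training feedback, which is not shown here.

```python
# Minimal sketch of a generic sharpness-aware minimization (SAM) step,
# assuming a PyTorch model and a self-supervised loss function `ssl_loss`.
# This is NOT the paper's ASCL/ASAM implementation; it only illustrates the
# "perturb toward sharper loss, then update" principle the abstract refers to.
import torch

def sam_step(model, batch, ssl_loss, base_optimizer, rho=0.05):
    # 1) Gradient at the current weights.
    base_optimizer.zero_grad()
    loss = ssl_loss(model, batch)
    loss.backward()

    # 2) Ascend to the worst-case nearby weights within radius rho.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)          # apply the perturbation
            eps.append(e)

    # 3) Gradient at the perturbed weights drives the actual update.
    base_optimizer.zero_grad()
    ssl_loss(model, batch).backward()

    # 4) Undo the perturbation, then step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```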
