Conditional Past Experience Generation for Dark Continual Learning

Abstract

Continual learning (CL) aims to learn a sequence of tasks without forgetting. Numerous efforts have been made to tackle CL, including data-centric, model-centric, and algorithm-centric methods. The more information an algorithm can obtain from previous tasks, e.g., their training data, the easier the CL problem becomes. However, few studies focus on the most difficult setting, i.e., dark CL (DCL), where only the model of the last task is available. DCL is a typical setting in real-world applications, e.g., CL tasks based on models trained on private or privileged data. To solve DCL, we propose a novel recursive generalization bound, which can also be applied to arbitrary traditional CL (TCL) settings. To minimize this bound, we propose a novel method, conditional past experience generation (CPEG), which reconstructs the previous conditional training data in the DCL setting. In experiments, we apply CPEG to a wide range of benchmarks, and the results show that CPEG significantly reduces forgetting. Moreover, CPEG can serve as a regularization term for any CL baseline. We also conduct experiments in the TCL setting: the performance of almost all baselines is improved, especially on the most difficult class-incremental tasks.
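The abstract does not give CPEG's algorithm, but the core idea, synthesizing class-conditional pseudo-examples from a frozen previous-task model without access to its training data, can be illustrated with a toy model-inversion sketch. The linear-softmax model, the gradient-ascent procedure, and all names below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def synthesize(W, b, target, steps=200, lr=0.5, seed=0):
    """Toy stand-in for conditional past-experience generation:
    starting from noise, run gradient ascent on log p(target | x)
    under a FROZEN linear-softmax model (W, b), producing a
    pseudo-example of class `target` without any stored data.
    (Hypothetical sketch; the actual CPEG method is not specified
    in the abstract.)"""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=W.shape[1])
    for _ in range(steps):
        p = softmax(W @ x + b)
        onehot = np.zeros_like(p)
        onehot[target] = 1.0
        # d log p(target | x) / dx = W^T (onehot - p)
        x += lr * W.T @ (onehot - p)
    return x

# Frozen "previous-task" model with two classes:
W = np.array([[2.0, 0.0], [0.0, 2.0]])
b = np.zeros(2)
pseudo = synthesize(W, b, target=1)
print(int(np.argmax(W @ pseudo + b)))  # the synthesized input is classified as class 1
```

In a replay-style CL method, such synthesized samples could then be mixed into the current task's training batches as a surrogate for the inaccessible past data.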
