Abstract

Most existing studies on continual learning (CL) consider the task-based setting, where task boundaries are known to the learner during training. However, this setting may be impractical for real-world problems, where new tasks arrive with unannounced distribution shifts. In this article, we introduce a new boundary-unknown continual learning scenario called continuum incremental learning (CoIL), in which the incremental unit may be a concatenation of several tasks or a subset of a single task. To identify task boundaries, we design a continual out-of-distribution (OOD) detection method based on softmax probabilities, which detects OOD samples with respect to the most recently learned task. We then combine it with existing continual learning approaches to solve the CoIL problem. Furthermore, we investigate the more challenging task-reappearance setting and propose a method named continual learning with unknown task boundary (CLUTaB). CLUTaB first uses in-distribution detection and an OOD loss to determine whether a set of data is drawn from any previously learned distribution, and then applies a two-step inference technique to improve continual learning performance. Experiments show that our methods work well with existing continual learning approaches and achieve strong performance on the CIFAR-100 and mini-ImageNet datasets.
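The abstract does not spell out the detection rule, but softmax-based OOD detection is commonly implemented by thresholding the maximum softmax probability of the current model and flagging a task boundary when most samples in an incoming batch fall below the threshold. The sketch below illustrates that general idea only; the function names (`msp_ood_scores`, `detect_task_boundary`) and the `threshold` and `ood_fraction` parameters are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_ood_scores(model, x):
    """Maximum softmax probability for each input.

    Low scores suggest a sample is out-of-distribution with respect
    to the classes the model has learned so far.
    """
    logits = model(x)                      # (batch, num_seen_classes)
    probs = F.softmax(logits, dim=-1)
    return probs.max(dim=-1).values        # (batch,)

@torch.no_grad()
def detect_task_boundary(model, batch, threshold=0.5, ood_fraction=0.9):
    """Flag a possible new task when most samples in the batch look OOD."""
    scores = msp_ood_scores(model, batch)
    frac_ood = (scores < threshold).float().mean().item()
    return frac_ood >= ood_fraction
```

In this reading, a positive boundary signal would trigger whatever the underlying CL method does at a task switch (e.g., expanding the head or freezing regularization anchors), while a negative signal treats the batch as a continuation of the current task.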
