Abstract

Few-shot class-incremental learning is a step forward in the realm of incremental learning, catering to a more realistic setting. In typical incremental learning scenarios, the initial session possesses ample data for effective training. However, subsequent sessions often lack sufficient data, so the model simultaneously faces the challenges of catastrophic forgetting from incremental learning and overfitting from few-shot learning. Existing methods employ a fine-tuning strategy on new sessions to carefully maintain a balance between plasticity and stability. In this study, we challenge this balance and design a lazy learning baseline that is more biased towards stability: pre-training a feature extractor with initial-session data and fine-tuning a cosine classifier. For new sessions, we forgo further training and instead use class prototypes for classification. Experiments across the CIFAR100, miniImageNet, and CUB200 benchmarks reveal that our approach outperforms state-of-the-art methods. Furthermore, detailed analysis experiments uncover a common challenge in existing few-shot class-incremental learning methods: low accuracy on new-session classes. We provide insightful explanations for this challenge. Finally, we introduce a new indicator, separate accuracy, designed to more accurately describe how methods handle both old and new classes. Model weights and source code of our method are available at https://github.com/rumorgin/LLB.
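
To make the prototype-based inference step concrete, the following is a minimal PyTorch sketch of the "lazy" new-session procedure described above: features from the pre-trained backbone are averaged per class to form prototypes, and test samples are assigned to the class with the highest cosine similarity. The function names, the `backbone` module, and the data-loader interface are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def class_prototypes(backbone, loader, num_classes, feat_dim, device="cpu"):
    """Average L2-normalized features per class to build prototypes (no further training)."""
    backbone.eval()
    protos = torch.zeros(num_classes, feat_dim, device=device)
    counts = torch.zeros(num_classes, device=device)
    for images, labels in loader:
        labels = labels.to(device)
        feats = F.normalize(backbone(images.to(device)), dim=1)
        protos.index_add_(0, labels, feats)
        counts += torch.bincount(labels, minlength=num_classes).float()
    return F.normalize(protos / counts.clamp(min=1).unsqueeze(1), dim=1)

@torch.no_grad()
def cosine_classify(backbone, images, prototypes, device="cpu"):
    """Predict the class whose prototype has the highest cosine similarity to the image feature."""
    feats = F.normalize(backbone(images.to(device)), dim=1)
    return (feats @ prototypes.t()).argmax(dim=1)
```

In this sketch, prototypes of new-session classes are simply appended to those of earlier classes, so old knowledge is never overwritten, which is the stability bias the baseline relies on.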
