Abstract

Class incremental learning (CIL) aims to continuously learn new classes while maintaining discrimination for old classes from sequentially arriving data. Due to the lack of old-class samples, existing CIL methods fail to learn discriminative representations for both old and new classes simultaneously, resulting in a severe performance drop on old classes, i.e., the well-known catastrophic forgetting phenomenon. Different from most existing works, we facilitate CIL by learning generic feature representations that perform well on both seen and unseen classes. Specifically, we prove that representations with a substantial number of significant singular values benefit CIL through better preservation of old knowledge. However, an overly uniform singular value spectrum hurts discrimination on the current task. Furthermore, we propose that increasing the embedding dimension can increase the number of significant singular values, and we validate this assumption from two perspectives: adopting different pooling techniques and devising a wider network. Meanwhile, we also prove that satisfactory current-task accuracy and old-knowledge preservation can be achieved simultaneously. Finally, we devise the simple yet effective generic feature representation regulation (GFR) and incorporate it into two baselines. Extensive experiments are conducted on CIFAR100, ImageNet-Subset, and ImageNet. The results show that the proposed method boosts the performance of both baselines by a large margin (2.00%-9.58% on CIFAR100, 0.68%-7.10% on ImageNet-Subset, and 1.18%-5.04% on ImageNet), outperforming existing state-of-the-art methods.
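The abstract's central quantity is the number of significant singular values in the feature representation. The following minimal sketch (not the paper's GFR method; the threshold, dimensions, and random features are illustrative assumptions) shows one way such a count could be measured for a batch of embeddings, and how it can be compared across a narrow and a wide embedding, e.g., obtained with different pooling strategies or network widths.

```python
# Illustrative sketch only: count "significant" singular values of a
# feature matrix. Threshold and dimensions are assumptions, not the
# paper's exact criterion.
import numpy as np

def significant_singular_values(features, rel_threshold=0.01):
    """Count singular values above rel_threshold * the largest one.

    features: (num_samples, embedding_dim) matrix of backbone embeddings.
    """
    # Center the features so singular values reflect variance directions.
    centered = features - features.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)  # sorted descending
    return int((s > rel_threshold * s[0]).sum())

rng = np.random.default_rng(0)
num_samples = 512

# Compare a narrow and a wide embedding (e.g., different pooling / width).
for dim in (64, 512):
    feats = rng.standard_normal((num_samples, dim))
    print(f"embedding dim {dim}: {significant_singular_values(feats)} significant singular values")
```

With isotropic features as in this toy example, the count is bounded by the embedding dimension, which is consistent with the abstract's claim that a larger embedding dimension can admit more significant singular values; the actual behaviour on learned representations depends on the training method.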
