Abstract

Class incremental learning (CIL) aims to learn new classes from a data stream in which old-class data is largely discarded due to privacy or memory restrictions. A handful of exemplars cannot capture the full distribution of old classes, so the separation between old and new classes is hard to guarantee, which is a major cause of catastrophic forgetting. To address this problem, we first propose incremental semantics mining (ISM), which reduces misclassification between old and new classes by excluding old-class semantics from the representations of new classes. We then propose a distillation-based representation expansion strategy that encodes the incremental semantics into an additional representation space. Compared with the standard representation expansion strategy, our method incurs lower memory overhead and computational cost. In addition, we propose an old-model queue to facilitate the maintenance of knowledge from earlier stages. Extensive experiments on the CIFAR-100 and ImageNet datasets demonstrate the superiority of our method in both performance and parameter efficiency, and several state-of-the-art results are established under different incremental settings. Code: https://github.com/zihuanqiu/ISM-Net
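To make the general idea of distillation-based representation expansion concrete, the following is a minimal PyTorch sketch, not the paper's ISM-Net architecture: a frozen old feature extractor is paired with a small new branch whose features occupy an additional representation space, and a standard temperature-scaled logit distillation term preserves old-class predictions. All module names, dimensions, and the loss form here are illustrative assumptions; the actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions for illustration only.
IN_DIM, OLD_DIM, NEW_DIM, OLD_CLASSES, NEW_CLASSES = 512, 64, 32, 50, 10


class ExpandedModel(nn.Module):
    """Illustrative sketch: frozen old extractor plus a small new branch.

    The two feature sets are concatenated so that incremental semantics are
    encoded in an additional representation space instead of overwriting the
    old one.
    """

    def __init__(self, old_extractor: nn.Module):
        super().__init__()
        self.old_extractor = old_extractor
        for p in self.old_extractor.parameters():  # old knowledge stays fixed
            p.requires_grad = False
        self.new_branch = nn.Sequential(nn.Linear(IN_DIM, NEW_DIM), nn.ReLU())
        self.classifier = nn.Linear(OLD_DIM + NEW_DIM, OLD_CLASSES + NEW_CLASSES)

    def forward(self, x):
        with torch.no_grad():
            f_old = self.old_extractor(x)          # old-class semantics
        f_new = self.new_branch(x)                 # incremental semantics
        return self.classifier(torch.cat([f_old, f_new], dim=1))


def distill(student_logits, teacher_logits, T: float = 2.0):
    """Generic temperature-scaled logit distillation (an assumption,
    not necessarily the paper's exact loss)."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * T * T


# Usage sketch with toy modules standing in for real backbones.
old_extractor = nn.Sequential(nn.Linear(IN_DIM, OLD_DIM), nn.ReLU())
old_head = nn.Linear(OLD_DIM, OLD_CLASSES)
model = ExpandedModel(old_extractor)

x = torch.randn(4, IN_DIM)
logits = model(x)                                   # (4, OLD_CLASSES + NEW_CLASSES)
with torch.no_grad():
    teacher_logits = old_head(old_extractor(x))     # old model's predictions
kd_loss = distill(logits[:, :OLD_CLASSES], teacher_logits)
```

The expansion here uses a single frozen old model; the paper's old-model queue generalizes this by keeping several earlier models available as distillation teachers.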
