Abstract

Existing class incremental learning methods typically employ knowledge distillation to minimize discrepancies in model outputs. However, these methods are restricted by the mismatch between old knowledge and new data. To alleviate this issue, we introduce a semantic alignment approach that decouples classification and distillation into different semantic spaces. The mismatched new data are regarded as out-of-distribution samples with respect to the old class distribution, and the corresponding pseudo-labels are assigned to the new data by the original network. Intuitively, these pseudo-labels can be consistently preserved in the old semantic space. Moreover, we develop auxiliary self-supervised classifiers to learn more generalized representations, enabling a better stability-plasticity trade-off. Furthermore, self-distillation is employed to refine the self-supervised knowledge from the auxiliary classifiers. Extensive experiments demonstrate that our method achieves the best performance on the CIFAR100, ImageNet100, ImageNet, CUB200, and Stanford-Dogs120 datasets. Notably, our method outperforms existing methods by a substantial margin when only one old exemplar is stored per class, i.e., improvements of 11.34% and 21.46% on CIFAR100 with 5 and 10 incremental phases, respectively.
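To make the decoupling concrete, the sketch below illustrates one plausible reading of the idea: classification is computed over the full (old + new) label space, while distillation is computed only over the old semantic space against soft pseudo-labels produced by the frozen original network. This is a minimal PyTorch-style illustration, not the paper's exact formulation; the names `old_model`, `new_model`, the temperature `T`, and the way the two losses are combined are assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def decoupled_losses(old_model, new_model, x_new, y_new, T=2.0):
    """Sketch: classification in the joint label space plus distillation
    restricted to the old semantic space (hypothetical formulation)."""
    # Soft pseudo-labels from the frozen original network (old classes only)
    with torch.no_grad():
        old_logits = old_model(x_new)
        pseudo = F.softmax(old_logits / T, dim=1)

    new_logits = new_model(x_new)          # logits over old + new classes
    n_old = old_logits.size(1)

    # Classification loss over the full (old + new) semantic space
    cls_loss = F.cross_entropy(new_logits, y_new)

    # Distillation loss confined to the old semantic space
    kd_loss = F.kl_div(
        F.log_softmax(new_logits[:, :n_old] / T, dim=1),
        pseudo,
        reduction="batchmean",
    ) * T * T

    return cls_loss, kd_loss
```

In this reading, keeping the distillation target in the old semantic space avoids forcing the old network's outputs to explain new-class data, which is the mismatch the abstract refers to.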
