Abstract

Owing to their superior performance, exemplar-based methods with knowledge distillation (KD) are widely applied in class incremental learning (CIL). However, they suffer from two drawbacks: 1) data imbalance between the old/learned and new classes biases the classifier toward the head/new classes, and 2) deep neural networks (DNNs) suffer from distribution drift when learning sequential tasks, which narrows the feature space and degrades the representation of old tasks. For the first problem, we theoretically analyze the insufficiency of the softmax loss under data imbalance and propose the imbalance softmax (im-softmax) loss to relieve imbalanced learning, re-scaling the output logits to underfit the head/new classes. For the second problem, we calibrate the feature space with an incremental-adaptive angular margin (IAAM) loss. The new classes form a complete distribution in the feature space while the old classes are squeezed. To recover the old feature space, we first compute the included angles between normalized features and normalized anchor prototypes and use the angle distribution to represent the class distribution; we then replenish the old distribution with its deviation from the new one. Each anchor prototype is predefined as a learnable vector for a designated class. The proposed im-softmax reduces the bias in the linear classification layer, while IAAM rectifies representation learning, reducing the intra-class distance and enlarging the inter-class margin. Finally, we seamlessly combine im-softmax and IAAM in an end-to-end training framework, called dual balanced class incremental learning (DBL), for further improvements. Experiments demonstrate that the proposed method achieves state-of-the-art (SOTA) performance on several benchmarks, including CIFAR10, CIFAR100, Tiny-ImageNet, and ImageNet-100.
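
The abstract does not give the exact formulations of the two losses, so the following PyTorch sketch only illustrates the general ideas under stated assumptions: the constant down-scaling of new-class logits in im-softmax and the fixed additive angular margin in IAAM are illustrative choices, and names such as `new_class_scale` and `margin` are hypothetical rather than the authors' definitions.

```python
# Illustrative sketch only; not the authors' exact im-softmax or IAAM losses.
import torch
import torch.nn.functional as F


def imbalance_softmax_loss(logits, targets, new_class_mask, new_class_scale=0.8):
    """Cross-entropy with new-class (head) logits down-scaled so the classifier
    underfits them, easing the bias toward new classes.

    new_class_mask: bool tensor [num_classes], True for new classes.
    new_class_scale: assumed scalar < 1 applied to new-class logits.
    """
    scale = torch.where(new_class_mask,
                        torch.full_like(logits[0], new_class_scale),
                        torch.ones_like(logits[0]))
    return F.cross_entropy(logits * scale, targets)


def iaam_loss(features, prototypes, targets, margin=0.2, scale=16.0):
    """Angular-margin loss on the included angle between L2-normalized features
    and learnable anchor prototypes: an additive margin on the target-class
    angle reduces intra-class distance and enlarges the inter-class margin.
    A fixed margin is used here; the adaptive margin derived from the old/new
    angle distributions described in the abstract is omitted.
    """
    feats = F.normalize(features, dim=1)     # [B, D]
    protos = F.normalize(prototypes, dim=1)  # [C, D]
    cos = feats @ protos.t()                 # cosines of included angles, [B, C]
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    one_hot = F.one_hot(targets, num_classes=protos.size(0)).bool()
    cos_with_margin = torch.where(one_hot, torch.cos(theta + margin), cos)
    return F.cross_entropy(scale * cos_with_margin, targets)
```

In an end-to-end framework such as DBL, the two terms would be combined into a single training objective, e.g. a weighted sum of the classification loss on the linear layer (im-softmax) and the feature-space loss against the anchor prototypes (IAAM); the weighting scheme is not specified in the abstract.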
