Deep Neural Networks (DNNs) have achieved great success on classification tasks with closed class sets. In the real world, however, new classes, such as new categories of social media topics, are continually added, making it necessary to learn incrementally. This is difficult for DNNs because they tend to overfit to new classes while forgetting old ones, a phenomenon known as catastrophic forgetting. State-of-the-art (SOTA) methods rely on knowledge distillation and data replay techniques but still have limitations. In this work, we analyze the causes of catastrophic forgetting in class incremental learning and attribute it to three factors: representation drift, representation confusion, and classifier distortion. Based on this view, we propose a two-stage learning framework with a fixed encoder and an incrementally updated prototype classifier. The encoder produces a feature space with high intrinsic dimensionality, while the prototype classifier preserves the decision boundaries of previously learned classes. Experimental results on public image datasets show that our non-exemplar-based method significantly outperforms SOTA exemplar-based methods.
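To make the classifier idea concrete, the following is a minimal sketch (not the paper's implementation) of a nearest-prototype classifier on top of a frozen encoder: each newly arriving class contributes one prototype, the mean of its feature vectors, and old prototypes are never modified, so earlier decision boundaries are untouched when new classes are added. The class name `PrototypeClassifier`, its dictionary-based API, and the feature dimensions are illustrative assumptions.

```python
# Minimal sketch (illustrative only): nearest-prototype classification
# over features produced by a frozen encoder.
import numpy as np


class PrototypeClassifier:
    """Stores one prototype (mean feature vector) per class and classifies
    by nearest prototype; new classes are added without touching old ones."""

    def __init__(self):
        self.prototypes = {}  # class_id -> prototype vector

    def add_classes(self, features_by_class):
        """Register prototypes for newly arrived classes.

        features_by_class: dict mapping class_id -> array of shape
        (n_samples, feature_dim) from the frozen encoder.
        """
        for class_id, feats in features_by_class.items():
            self.prototypes[class_id] = np.asarray(feats).mean(axis=0)

    def predict(self, features):
        """Assign each feature vector to the class of its nearest prototype."""
        class_ids = list(self.prototypes)
        protos = np.stack([self.prototypes[c] for c in class_ids])  # (C, D)
        # Squared Euclidean distance from every sample to every prototype.
        dists = ((features[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        return [class_ids[i] for i in dists.argmin(axis=1)]


# Usage: task 1 introduces classes 0 and 1; task 2 later adds class 2.
rng = np.random.default_rng(0)
clf = PrototypeClassifier()
clf.add_classes({0: rng.normal(0, 1, (20, 8)), 1: rng.normal(3, 1, (20, 8))})
clf.add_classes({2: rng.normal(-3, 1, (20, 8))})  # incremental update
print(clf.predict(rng.normal(3, 1, (5, 8))))  # mostly class 1
```

Because the encoder is fixed and each prototype depends only on its own class's data, adding a class neither drifts old representations nor distorts the existing classifier, which is the intuition behind the two-stage design described above.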