Abstract

Lifelong learning, which attempts to alleviate catastrophic forgetting in machine learning models, is gaining increasing attention for deep neural networks. Recent lifelong learning methods that continuously append new classes to the classification layer of a deep neural network suffer from a model-capacity issue. Representation learning is a feasible solution to this problem; however, representation learning in the lifelong setting has not been carefully evaluated, especially on unseen classes. In this work, we focus on evaluating the performance of lifelong representation learning on unseen classes and propose an effective lifelong representation learning method for matching image pairs without increasing the model capacity. Specifically, we preserve the knowledge of previous tasks in the manifolds learned from the outputs of multiple network layers. The distributions of these manifolds are then used to generate pseudo feature maps, which are replayed in combination with a knowledge distillation strategy to improve performance. We conduct experiments on three widely used person re-identification (ReID) datasets to evaluate the performance of lifelong representation learning on unseen classes. The results show that our proposed method achieves state-of-the-art performance compared with other related lifelong learning methods.

Keywords: Lifelong learning, Feature replay, Person re-identification
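The core mechanism described above can be sketched as follows. This is a minimal, hypothetical illustration rather than the paper's actual implementation: it assumes each class's layer outputs are summarized by a diagonal Gaussian (the paper models manifolds over multiple layer outputs), pseudo feature maps are drawn from that stored distribution, and a standard Hinton-style soft-target distillation loss is applied during replay. All function names are our own.

```python
import numpy as np

def fit_feature_distribution(features):
    """Fit a per-dimension Gaussian to flattened feature maps of old-task data.

    features: array of shape (n_samples, feature_dim).
    Returns (mean, std); a simplified stand-in for the learned manifold.
    """
    mu = features.mean(axis=0)
    sigma = features.std(axis=0) + 1e-6  # avoid zero variance
    return mu, sigma

def sample_pseudo_features(mu, sigma, n, rng):
    """Generate pseudo feature maps by sampling the stored distribution,
    so old-task features can be replayed without storing raw images."""
    return rng.normal(mu, sigma, size=(n, mu.shape[0]))

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target knowledge-distillation loss: KL divergence between the
    old (teacher) and current (student) model outputs on replayed features."""
    def softmax(x):
        z = x / temperature
        e = np.exp(z - np.max(z, axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    return float(np.mean(kl))
```

In a training loop, one would fit the distribution on a task's features before moving on, then at later tasks sample pseudo features and add the distillation term (teacher = frozen previous model) to the new-task loss.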
