Abstract

Popular deep neural network models in artificial intelligence systems are found to suffer from the catastrophic forgetting problem: when learning a sequence of tasks, deep networks tend to achieve high performance only on the current task while losing performance on previously learned tasks. This issue is commonly addressed by continual learning or lifelong learning. The majority of existing continual learning approaches adopt a class-incremental strategy, which continuously expands the network structure. Representation learning, which leverages only the feature vector before the classification layer, is able to maintain the model capacity in continual learning. However, recent continual representation learning methods are not well evaluated on unseen classes. In this paper, we focus on the performance of continual representation learning on unseen classes and propose a novel auto-weighted latent embeddings method. For each task, autoencoders are developed to reconstruct feature maps from different levels of the neural network. The embeddings generated by these autoencoders on the manifolds are constrained when learning a new task so as to preserve the knowledge of previous tasks. An adapted auto-weighted approach is developed to assign different levels of importance to the embeddings based on reconstruction errors. Our experiments on three widely used Person Re-identification datasets expose the existence of the catastrophic forgetting problem for representation learning on unseen classes, and demonstrate that our proposed method outperforms related methods in the continual representation learning setup.
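To make the described mechanism concrete, the following is a minimal sketch, not the authors' code, of how per-level autoencoder embeddings could be constrained during new-task training, with level importances derived from reconstruction errors. The module names, the inverse-error weighting rule, and the MSE drift penalty are all assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAutoencoder(nn.Module):
    """Small autoencoder over a flattened feature map from one network level."""

    def __init__(self, in_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU())
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)


def auto_weighted_embedding_loss(feats, old_autoencoders, old_embeddings, eps=1e-8):
    """Penalize drift of current embeddings from those stored before the new task.

    feats: list of flattened feature maps from different levels (current model).
    old_autoencoders: frozen autoencoders trained on the previous task(s).
    old_embeddings: embeddings of the same inputs recorded before new-task training.
    Levels with lower reconstruction error get higher weight (assumed rule).
    """
    errors, drifts = [], []
    for x, ae, z_old in zip(feats, old_autoencoders, old_embeddings):
        z_new, x_rec = ae(x)  # frozen autoencoder from the previous task
        # Reconstruction error is used only to weight levels, so no gradient here.
        errors.append(F.mse_loss(x_rec.detach(), x.detach()))
        # Drift term: gradients flow through z_new back into the current features.
        drifts.append(F.mse_loss(z_new, z_old))
    errors = torch.stack(errors)
    weights = 1.0 / (errors + eps)
    weights = weights / weights.sum()  # normalize per-level importances
    return (weights * torch.stack(drifts)).sum()
```

In a training loop, this term would be added to the new task's objective so that feature maps whose previous-task autoencoders still reconstruct them well are held closer to their stored embeddings.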
