Abstract

It is widely acknowledged that biological intelligence can learn continually without forgetting previously acquired skills. Unfortunately, many artificial intelligence techniques, especially (deep) neural network (NN)-based ones, suffer from catastrophic forgetting: previously learned tasks are severely forgotten when a new one is learned. How to train NNs without catastrophic forgetting, termed continual learning, is emerging as a frontier topic and attracting considerable research interest. Inspired by the memory replay and synaptic consolidation mechanisms of the brain, in this article we propose a novel and simple framework termed memory recall (MeRec) for continual learning with deep NNs. In particular, we first analyze feature stability across tasks in an NN and show that certain layers yield task-stable features. Based on this observation, we use a memory module to keep the feature statistics (mean and standard deviation) for each learned task. Using these statistics, we show that a simple replay strategy with Gaussian-distribution-based feature regeneration can recall and recover the knowledge of previous tasks. Together with weight regularization, MeRec preserves the weights learned on previous tasks. Based on this simple framework, MeRec achieves leading performance with an extremely small memory budget (only two feature vectors per class) for continual learning on the CIFAR-10 and CIFAR-100 datasets, reducing the accuracy drop after several tasks by at least 50% compared to previous state-of-the-art approaches.
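
To make the replay idea concrete, below is a minimal sketch of the kind of memory module the abstract describes: storing only the per-class feature mean and standard deviation (two vectors per class) and regenerating pseudo-features by Gaussian sampling for replay. This is an illustrative assumption of how such a module could look in PyTorch, not the authors' implementation; the class and method names (`FeatureMemory`, `memorize`, `recall`) are hypothetical.

```python
import torch


class FeatureMemory:
    """Sketch of a Gaussian feature-replay memory (assumed design, not MeRec's exact code).

    For each class we keep only two vectors: the mean and the standard deviation
    of features taken from a task-stable layer of the network.
    """

    def __init__(self):
        self.stats = {}  # class_id -> (mean, std)

    def memorize(self, feats: torch.Tensor, class_id: int) -> None:
        # feats: (num_samples, feature_dim) activations for one class
        self.stats[class_id] = (feats.mean(dim=0), feats.std(dim=0))

    def recall(self, class_id: int, num_samples: int) -> torch.Tensor:
        # Regenerate pseudo-features by sampling a per-dimension Gaussian
        mean, std = self.stats[class_id]
        return mean + std * torch.randn(num_samples, mean.shape[0])
```

In use, the recalled pseudo-features would be mixed with the new task's data when training the layers above the memorized one, while a weight-regularization term discourages drift of the previously learned weights; the choice of layer and the exact loss are specified in the full paper.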
