Abstract

Incremental learning requires a model to learn new tasks continually without forgetting the tasks it has already learned. However, when a deep learning model learns new tasks, it catastrophically forgets the tasks it learned before. Researchers have proposed methods to alleviate catastrophic forgetting, but these methods only encourage the model to extract features relevant to previously learned tasks, suppressing the extraction of features useful for tasks not yet learned. As a result, when a deep learning model learns a new task incrementally, it must quickly learn to extract the features relevant to that task; this requires a significant change in the model's feature-extraction behavior, which increases the difficulty of learning. The model is therefore caught in a dilemma: reduce the learning rate to retain existing knowledge, or increase the learning rate to acquire new knowledge quickly. We present a study that aims to alleviate this problem by introducing self-supervised learning into incremental learning methods. We believe that a task-independent self-supervised learning signal helps the model extract features that are not only effective for the current task but also suitable for tasks it has not yet learned. We give a detailed algorithm that combines self-supervised learning signals with incremental learning methods. Extensive experiments on several datasets show that the self-supervised signal significantly improves the accuracy of most incremental learning methods without the need for additional labeled data. We found that the self-supervised learning signal works best for replay-based incremental learning methods.

Highlights

  • The long-term goal of artificial intelligence is to build an agent that can act autonomously for a long time; this requires the agent to continuously learn new tasks to adapt to a changing environment and to deal with various unknown new objects after being trained and deployed in the natural environment

  • As the cache size increased to 1000, the average accuracy of class-IL improved by 5.13% on CIFAR-10 and 1.86% on Tiny ImageNet, which shows that the benefit of the self-supervised signal to the incremental learning model continues to grow as the cache size increases

  • Taking advantage of the fact that the self-supervised signal is independent of the specific task being learned, we add self-supervised learning to the incremental learning method to smooth the large changes in feature-extraction behavior that occur as the model learns different tasks, reducing the difficulty of learning; see the sketch after this list
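The combined objective implied by this idea can be written compactly; the following is a minimal sketch, in which the trade-off weight λ is a hypothetical hyperparameter not specified in this summary:

    L_total = L_IL + λ · L_SSL

Here L_IL is the loss of the underlying incremental learning method (replay-based, regularization-based, or parameter-isolation-based) and L_SSL is the task-independent self-supervised loss.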

Summary

INTRODUCTION

The long-term goal of artificial intelligence is to build an agent that can act autonomously for a long time; this requires the agent to continuously learn new tasks to adapt to a changing environment and to deal with various unknown new objects after being trained and deployed in the natural environment.

We discuss the effect of combining self-supervised learning with the main types of incremental learning methods in use today. In extensive experiments on the CIFAR-10 and Tiny ImageNet datasets, we found that self-supervised learning is more effective at alleviating catastrophic forgetting in incremental learning methods based on sample replay than in those based on parameter isolation.

In the regularization-based methods, θ∗ denotes the model parameters after learning the previous tasks, F and Ω record the importance of the network weights in different forms, and Y_o and Y_c denote the outputs of the previous and current models, respectively, on the data of the current task. Replay-based methods, such as ER [16], [18] and Dark Experience Replay (DER) [21], sample part of the training data of each task into a cache M and use the data in the cache in place of the full training data of the previous tasks. For the self-supervised signal, the function Rot(·|m) rotates an image by m degrees, and f_θ^y(·) predicts the probability that the model outputs y; following [8], we set K to 4.
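To make the role of the importance weights concrete, a typical regularization penalty has the following shape. This is a sketch in the style of EWC/SI-type methods, not necessarily the exact formulation used in the paper, and the trade-off weight λ is a hypothetical hyperparameter:

    L(θ) = L_task(θ) + λ Σ_i Ω_i (θ_i − θ_i∗)²

with F playing the same role as Ω in Fisher-information-based variants; distillation-style variants instead encourage the current output Y_c to stay close to the previous output Y_o on current-task data.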
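As a sketch of the replay mechanism described above, the following PyTorch code maintains the cache M and mixes cached data into each training step. The names ReservoirBuffer and er_step are hypothetical, and reservoir sampling is the usual filling rule for ER rather than something stated in this summary:

```python
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size cache M filled by reservoir sampling (a common choice
    for ER; the paper's exact sampling rule is not given here)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.examples = []   # list of (x, y) pairs
        self.n_seen = 0      # number of examples observed so far

    def add(self, x, y):
        for xi, yi in zip(x, y):
            if len(self.examples) < self.capacity:
                self.examples.append((xi, yi))
            else:
                # keep the new example with probability capacity / (n_seen + 1)
                j = random.randint(0, self.n_seen)
                if j < self.capacity:
                    self.examples[j] = (xi, yi)
            self.n_seen += 1

    def sample(self, batch_size):
        batch = random.sample(self.examples, min(batch_size, len(self.examples)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def er_step(model, optimizer, x, y, buffer, replay_batch_size=32):
    """One ER training step: current-task loss plus a replayed-data loss."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if buffer.examples:
        x_m, y_m = buffer.sample(replay_batch_size)
        loss = loss + F.cross_entropy(model(x_m.to(x.device)), y_m.to(x.device))
    loss.backward()
    optimizer.step()
    buffer.add(x.detach().cpu(), y.detach().cpu())
    return loss.item()
```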
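And a sketch of the rotation-based self-supervised signal: each image is rotated by 0, 90, 180 and 270 degrees (K = 4 rotation classes, as in the text, playing the role of Rot(·|m)), and an auxiliary head predicts which rotation was applied. The names rotate_batch, ssl_rotation_loss, backbone and rot_head are hypothetical, and the exact head architecture is not given in this summary:

```python
import torch
import torch.nn.functional as F

def rotate_batch(x):
    """Build the K = 4 rotated copies of a batch x of shape (B, C, H, W)
    and the corresponding rotation labels 0..3."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    x_rot = torch.cat(rotations, dim=0)                    # (4B, C, H, W)
    y_rot = torch.arange(4).repeat_interleave(x.size(0))   # (4B,)
    return x_rot, y_rot

def ssl_rotation_loss(backbone, rot_head, x):
    """Auxiliary rotation-prediction loss: cross-entropy over K = 4
    rotation classes, computed on features from the shared backbone."""
    x_rot, y_rot = rotate_batch(x)
    logits = rot_head(backbone(x_rot))
    return F.cross_entropy(logits, y_rot.to(logits.device))
```

During training, this auxiliary loss would simply be added to the incremental learning loss, e.g. loss = loss_il + lam * ssl_rotation_loss(backbone, rot_head, x), matching the combined objective sketched after the highlights.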

ALGORITHM DESCRIPTION
EXPERIMENTS
EVALUATION PROTOCOL
Method
Findings
CONCLUSION
