Abstract

It is known that deep learning models can outperform humans when a sufficiently large amount of training data is available. However, this typically holds only when a model is trained on a single task. When a model is trained continuously on multiple tasks, it abruptly forgets previously learned information, a phenomenon known as catastrophic forgetting. Various continual learning methods have been proposed to overcome this. In particular, in the class-incremental learning scenario, where no task identity is given at inference time, memory replay methods have shown strong performance. In general, a memory replay method stores class-representative samples to recall previously learned tasks, but this leads to poor performance when a sample could represent multiple classes. This paper therefore proposes a method that stores class boundary samples as well as class-representative samples in the memory buffer to improve replay performance. To this end, a class-representative sample is defined as one whose feature lies close to the mean feature of a class, and a class boundary sample as one whose feature lies between the means of two different classes. Experiments confirm that the proposed method outperforms existing methods, demonstrating the importance of utilizing class boundary samples in continual learning.
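To make the two selection criteria concrete, here is a minimal sketch of buffer selection over feature vectors (e.g., penultimate-layer embeddings). It is an illustration, not the paper's actual algorithm: it reads "located between the means of two different classes" as being roughly equidistant from the own-class mean and the nearest other-class mean, and the function name `select_buffer_samples` and parameters `n_rep` and `n_bnd` are hypothetical.

```python
import numpy as np

def select_buffer_samples(features, labels, n_rep=5, n_bnd=5):
    """Pick replay-buffer indices: class-representative and class-boundary samples.

    Hypothetical sketch, not the paper's implementation.
    features: (N, D) array of feature vectors; labels: (N,) array of class ids.
    """
    classes = np.unique(labels)
    # Mean feature vector of each class seen so far.
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    selected = set()
    for c in classes:
        idx = np.where(labels == c)[0]
        feats = features[idx]
        d_own = np.linalg.norm(feats - means[c], axis=1)
        # Representative samples: closest to their own class mean.
        selected.update(idx[np.argsort(d_own)[:n_rep]])
        others = [means[o] for o in classes if o != c]
        if not others:  # only one class so far, no boundary to measure
            continue
        # Distance from each sample to the nearest other class mean.
        d_other = np.linalg.norm(
            feats[:, None, :] - np.stack(others)[None, :, :], axis=2
        ).min(axis=1)
        # Boundary samples: nearly equidistant between the two class means.
        gap = np.abs(d_own - d_other)
        selected.update(idx[np.argsort(gap)[:n_bnd]])
    return np.array(sorted(selected))
```

Under this reading, the boundary criterion deliberately keeps exactly the samples a representative-only buffer discards: those near the decision surface between classes, which are the ones most at risk of being misclassified after later tasks shift the feature space.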

