Abstract

In this letter, we propose a continual learning approach for a sequence of scene classification tasks, where each task contains a group of land-cover classes. Our aim is to learn new tasks continually without significantly degrading performance on the old ones, despite the catastrophic forgetting problem inherent to neural networks. To this end, we propose a neural architecture composed of two trainable modules. The first module learns its weights by discriminating between the land-cover classes within the new task while keeping track of the old ones. The second module, in turn, maximizes the separation between tasks by learning on task prototypes stored in a linear memory (one prototype per task). Experimental results on two scene data sets (Merced and Optimal31) confirm the promising capability of the proposed method.
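The "one prototype per task" memory described above can be illustrated with a minimal sketch. This is an assumed, simplified illustration only (the paper's actual modules are trained neural networks): here each stored prototype is the mean feature vector of a task's samples, and a query is routed to the task with the nearest prototype. The class name `TaskPrototypeMemory` and the mean/nearest-neighbor choices are hypothetical, not taken from the paper.

```python
import math

class TaskPrototypeMemory:
    """Linear memory holding one prototype per task (hypothetical sketch).

    Prototype = mean feature vector of the task's samples; task
    identification = nearest prototype by Euclidean distance.
    """

    def __init__(self):
        self.prototypes = []  # grows by one entry per learned task

    def add_task(self, features):
        # Store the mean feature vector of the new task's samples.
        dim = len(features[0])
        proto = [sum(f[d] for f in features) / len(features) for d in range(dim)]
        self.prototypes.append(proto)

    def identify_task(self, x):
        # Return the index of the task whose prototype is closest to x.
        dists = [math.dist(x, p) for p in self.prototypes]
        return dists.index(min(dists))

mem = TaskPrototypeMemory()
mem.add_task([[0.0, 0.0], [0.2, 0.0]])   # features from a first task
mem.add_task([[5.0, 5.0], [5.2, 5.0]])   # features from a second task
print(mem.identify_task([0.1, 0.1]))     # nearest to the first task's prototype
```

In this toy setting the memory grows linearly with the number of tasks, which matches the abstract's description of a linear memory with one prototype per task.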
