Abstract

When building a model with learning capability, the usual assumption is that data for all possible situations or tasks are available in advance. However, handling a massive number of tasks sequentially would require storing the data of every previous task and retraining the model on them, which is infeasible. An alternative is to retrain the model on the new task data only, but the model's performance then collapses dramatically on the old tasks, a phenomenon known as catastrophic forgetting. To overcome this shortcoming, we propose a novel continual learning technique based on subnetworks that automatically regenerate the data of previously learned tasks. We train the proposed model sequentially on a set of classification tasks, where each task comprises a certain number of remote sensing scene classes. The proposed neural network architecture encapsulates two trainable subnetworks. The first module adapts its weights by minimizing the discrimination error between the land-cover classes while learning the new task. In parallel, the second module learns to reproduce the data of previous tasks by discovering the latent structure of the new task dataset. Experiments are conducted on two scene datasets (Merced and Optimal31). The experimental results confirm the superior performance and robustness of the proposed model.
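The two-subnetwork design described above (a discriminative classifier trained alongside a module that regenerates past-task data) follows the generative-replay pattern. Below is a minimal PyTorch sketch of that pattern, assuming a VAE-style generator over flattened scene features; the module names, dimensions, loss weights, and pseudo-labeling step are hypothetical illustrations under those assumptions, not the authors' implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    """Discriminative subnetwork: flattened scene features -> class logits."""
    def __init__(self, in_dim=1024, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_classes))

    def forward(self, x):
        return self.net(x)

class VAE(nn.Module):
    """Generative subnetwork: learns the latent data structure so that
    past-task data can be regenerated for replay."""
    def __init__(self, in_dim=1024, z_dim=32):
        super().__init__()
        self.z_dim = z_dim
        self.enc = nn.Linear(in_dim, 128)
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

    @torch.no_grad()
    def sample(self, n):
        z = torch.randn(n, self.z_dim, device=next(self.parameters()).device)
        return self.dec(z)

def train_task(classifier, vae, loader, prev=None, epochs=5, lr=1e-3):
    """Train both subnetworks on the current task. If frozen copies from the
    previous task are supplied, mix replayed pseudo-samples into each batch
    so the classifier does not forget earlier classes."""
    params = list(classifier.parameters()) + list(vae.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x, y in loader:  # x in [0, 1], matching the decoder's Sigmoid
            if prev is not None:
                old_vae, old_clf = prev
                with torch.no_grad():
                    x_re = old_vae.sample(x.size(0))     # replayed inputs
                    y_re = old_clf(x_re).argmax(dim=1)   # pseudo-labels
                x, y = torch.cat([x, x_re]), torch.cat([y, y_re])
            recon, mu, logvar = vae(x)
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
            loss = (F.cross_entropy(classifier(x), y)    # discrimination error
                    + F.mse_loss(recon, x) + 1e-3 * kl)  # generation error
            opt.zero_grad(); loss.backward(); opt.step()
    # Freeze copies of both subnetworks for replay when the next task arrives.
    return copy.deepcopy(vae).eval(), copy.deepcopy(classifier).eval()
```

A task sequence would then be processed by repeatedly calling `prev = train_task(classifier, vae, loader_t, prev)` for each task `t`, so the frozen copies from task `t` supply the replayed data when learning task `t + 1`.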
