Abstract

Humans gradually learn a sequence of cross-domain tasks and seldom suffer catastrophic forgetting. In contrast, deep neural networks achieve good performance only on specific tasks within a single domain. To equip networks with lifelong learning capability, we propose a Cross-Domain Lifelong Learning (CDLL) framework that fully exploits task similarities. Specifically, we employ a Dual Siamese Network (DSN) to learn the essential similarity features of tasks across different domains. To further exploit cross-domain similarity information, we introduce a Domain-Invariant Feature Enhancement Module (DFEM) that better extracts domain-invariant features. Moreover, we propose a Spatial Attention Network (SAN) that assigns different weights to different tasks based on the learned similarity features. Finally, to maximize the use of model parameters for learning new tasks, we propose a Structural Sparsity Loss (SSL) that makes the SAN as sparse as possible while preserving accuracy. Experimental results show that, compared with state-of-the-art methods, our approach markedly reduces catastrophic forgetting when learning multiple tasks across different domains in sequence. Notably, the proposed method scarcely forgets old knowledge while consistently improving on learned tasks, behavior that more closely mirrors human learning.
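The abstract does not spell out the SSL formulation. As a rough illustration only, the sketch below shows one standard way to induce structural sparsity: a group-lasso (L2,1) penalty that drives entire rows of a weight matrix to zero together, freeing whole units rather than scattered individual weights. The names `san`, `lam`, `group_lasso`, and `structural_sparsity_loss` are hypothetical and not taken from the paper.

```python
# Minimal sketch of a structural-sparsity regularizer in the spirit of the
# SSL described above (assumption: the paper's exact loss is not given here,
# so a generic group-lasso / L2,1 penalty is used as a stand-in).
import torch
import torch.nn as nn


def group_lasso(weight: torch.Tensor) -> torch.Tensor:
    """L2,1 norm: sum of L2 norms of each output row, which encourages
    entire rows (e.g., attention units) to shrink to zero together."""
    return weight.view(weight.size(0), -1).norm(p=2, dim=1).sum()


def structural_sparsity_loss(san: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """Sum the group-lasso penalty over all weight matrices of a
    (hypothetical) spatial attention network `san`, scaled by `lam`."""
    penalty = sum(group_lasso(p) for name, p in san.named_parameters()
                  if p.dim() > 1 and "weight" in name)
    return lam * penalty


# Usage: add the penalty to the current task's loss, so unused attention
# capacity is pruned and remains available for future tasks:
#   total_loss = task_loss + structural_sparsity_loss(san)
```

The design trade-off such a penalty encodes matches the abstract's stated goal: `lam` balances how aggressively the attention network is sparsified against accuracy on the task being learned.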
