Abstract

Deep neural networks outperform humans on many individual tasks, but they struggle to handle a sequence of new tasks drawn from different domains. Deep learning models typically must retain the parameters of previously learned tasks to perform well on new ones, and they forfeit the ability to generalize from previous data, which is inconsistent with how humans learn. We propose a novel lifelong learning framework that guides the model to acquire new knowledge without forgetting old knowledge by learning a similarity representation based on meta-learning. Specifically, we employ a cross-domain triplets network (CDTN) that minimizes the maximum mean discrepancy (MMD) between the current task and the knowledge base to learn a domain-invariant similarity representation across tasks from different domains. We further add a self-attention module to enhance the extraction of similarity features. In addition, we propose a spatial attention network (SAN) that assigns different weights according to the learned similarity representation of tasks. Experimental results show that our method effectively reduces catastrophic forgetting compared with state-of-the-art methods when learning many tasks. Moreover, the proposed method hardly forgets old knowledge while continuously improving performance on old tasks, which is more in line with the human way of learning.
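To make the MMD term concrete, the following minimal PyTorch sketch computes a (biased) RBF-kernel MMD between features of the current task batch and samples drawn from a knowledge base. This is only an illustration of the discrepancy measure the abstract refers to, not the authors' implementation: the feature dimensions, batch sizes, and kernel bandwidth below are assumptions.

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """Pairwise RBF kernel between the rows of x and y."""
    sq_dist = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-sq_dist / (2 * sigma ** 2))

def mmd_loss(source_feats, target_feats, sigma=1.0):
    """Biased estimate of squared MMD between two feature batches:
    MMD^2 = E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]."""
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st

# Hypothetical usage: embeddings of the current task vs. knowledge-base samples.
current_task_feats = torch.randn(32, 128)   # assumed 128-d embeddings
knowledge_base_feats = torch.randn(32, 128)
loss = mmd_loss(current_task_feats, knowledge_base_feats)
```

In practice such a term would be added to the task loss so that the feature extractor is pushed toward domain-invariant representations; the weighting and the choice of kernel bandwidth are tuning decisions not specified in the abstract.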
