Abstract

Task-incremental learning, a setting of continual learning, is an approach that exploits knowledge from previous tasks for the current new task. It aims to address two major challenges of continual learning: catastrophic forgetting and knowledge transfer (or sharing) between previous tasks and the current task. This paper improves task-incremental learning by (1) transferring the knowledge (not the training data) learned from previous tasks to a new task, in contrast to multi-task learning; (2) maintaining or even improving the performance of models learned on previous tasks while avoiding forgetting; and (3) building on (1) and (2) to develop a continual learning model for aspect sentiment classification. Specifically, we combine two contrastive-learning-based losses: a Contrastive Knowledge Sharing (CKS) module that encourages knowledge sharing between old and current tasks, and a Contrastive Supervised learning (CSC) module that improves performance on the current task. The experimental results show that our method avoids catastrophic forgetting of previously learned tasks and outperforms prior work on aspect sentiment classification.
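The abstract does not give the exact form of the CKS and CSC losses, but both are described as contrastive. As a rough illustration of the kind of objective involved, the sketch below implements a generic supervised contrastive loss (embeddings sharing a label are pulled together, others pushed apart) and shows a hypothetical weighted combination of the two module losses; the function names and the weighting scheme are assumptions, not the paper's actual formulation.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Generic supervised contrastive loss: for each anchor, maximize the
    log-probability of its same-label positives under a softmax over
    cosine similarities. NOT the paper's exact CKS/CSC loss."""
    # L2-normalize so dot products are cosine similarities.
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature
    n = len(features)
    # Exclude self-similarity from the softmax denominator.
    np.fill_diagonal(sim, -np.inf)
    # Positive pairs: same label, excluding the anchor itself.
    pos_mask = (labels[None, :] == labels[:, None]) & ~np.eye(n, dtype=bool)
    # Row-wise log-softmax (numerically stable log-sum-exp).
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - (row_max + np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True)))
    # Average log-probability over each anchor's positives.
    pos_counts = np.maximum(pos_mask.sum(axis=1), 1)
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1) / pos_counts
    return per_anchor.mean()

# Hypothetical combination of the two modules' objectives, with an
# assumed trade-off weight lambda_csc:
#   total_loss = loss_cks + lambda_csc * loss_csc
```

In this style of objective, the temperature controls how sharply the softmax concentrates on the most similar pairs; the relative weighting of the two contrastive terms would be a tunable hyperparameter.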
