Abstract
Continual relation extraction applies continual learning to relation extraction: the model is trained on new relations without losing its ability to classify old ones accurately. Continual learning faces two significant challenges: catastrophic forgetting and knowledge transfer. Previous work has shown that memory-based methods, which store a few training examples from old tasks and replay them while training on new tasks, can mitigate these problems and perform well on natural language processing tasks. However, memory-based methods tend to overfit and perform poorly on imbalanced datasets. To address this problem, we propose combining consistent representation learning with a prototype augmentation mechanism. We also reduce real-data storage by replacing some stored examples with virtual data points, rather than storing only real data; these virtual data points are generated from relation prototypes obtained through the data augmentation process. Experiments on the FewRel and TACRED datasets show that our method outperforms the latest baselines in mitigating catastrophic forgetting.
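As a minimal sketch of the virtual-data-point idea described above: a relation's prototype can be taken as the mean of its example embeddings, and virtual points can be sampled around that prototype instead of storing all real examples. The function names, the Gaussian perturbation, and the noise scale below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def relation_prototype(embeddings):
    """Prototype of a relation: the mean of its example embeddings.
    (Assumed definition; a common choice in prototype-based methods.)"""
    return np.mean(embeddings, axis=0)

def virtual_points(prototype, radius, n_samples, rng):
    """Sample virtual data points around a prototype by Gaussian
    perturbation, to be stored in memory instead of raw examples.
    The noise model is an illustrative assumption."""
    noise = rng.normal(scale=radius, size=(n_samples, prototype.shape[0]))
    return prototype + noise

rng = np.random.default_rng(0)
# Toy embeddings for one old relation: 5 examples, 4-dimensional.
emb = rng.normal(size=(5, 4))
proto = relation_prototype(emb)
# Replace the 5 stored real examples with 3 virtual points near the prototype.
virt = virtual_points(proto, radius=0.1, n_samples=3, rng=rng)
print(virt.shape)  # (3, 4)
```

This shrinks the episodic memory (3 synthetic vectors instead of 5 real examples here) while keeping replay samples close to the relation's region of embedding space.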