Abstract

As pre-trained language models (LMs) play an important role in various Natural Language Processing (NLP) tasks, it is becoming increasingly important to ensure that the knowledge learned by LMs is valid and correct. Unlike conventional knowledge bases, LMs implicitly memorize knowledge in their parameters, which makes it harder to correct knowledge that is incorrectly inferred or obsolete. The task of Knowledge Editing is to correct errors in language models while avoiding the expensive overhead of retraining the model from scratch. While existing methods have shown promising results, they fail on multiple edits because they ignore the conflicts between those edits. In this paper, we propose a novel framework that divides and conquers edits with parallel editors. Specifically, we design explicit and implicit multi-editor models that learn diverse editing strategies in terms of dynamic structure and dynamic parameters, respectively, which allows conflicting edits to be resolved in an efficient end-to-end manner. Our main findings are: (i) state-of-the-art Knowledge Editing methods with multi-edit capability, such as MEND and ENN, can hardly outperform the fine-tuning baseline; (ii) our proposed models outperform fine-tuning on two widely used Knowledge Editing datasets; (iii) additional analytical experiments verify that our approach learns diverse editing strategies and thus adapts better to multiple edits than state-of-the-art methods.
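
To make the divide-and-conquer idea concrete, the sketch below shows one hypothetical way edit requests could be partitioned across parallel editors so that edits likely to conflict are handled separately. This is an illustrative assumption only; the names (`EditRequest`, `Editor`, `route_edits`) and the subject-based grouping rule are invented here and are not the paper's actual implementation or API.

```python
# Illustrative sketch only: a hypothetical routing of edit requests to
# parallel editors. Not the paper's actual method or interface.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class EditRequest:
    subject: str      # e.g. "Eiffel Tower"
    relation: str     # e.g. "located_in"
    new_object: str   # corrected target, e.g. "Paris"


@dataclass
class Editor:
    """One editor specialised for a subset of (possibly conflicting) edits."""
    name: str
    assigned: List[EditRequest] = field(default_factory=list)

    def apply(self, model_params: Dict[str, float]) -> Dict[str, float]:
        # Placeholder: a real editor would compute a parameter update
        # (e.g. a gradient step or a hypernetwork-predicted delta) here.
        return model_params


def route_edits(edits: List[EditRequest],
                editors: List[Editor],
                key_fn: Callable[[EditRequest], int]) -> None:
    """Divide-and-conquer: spread edits across editors by a grouping key."""
    for edit in edits:
        editors[key_fn(edit) % len(editors)].assigned.append(edit)


if __name__ == "__main__":
    edits = [
        EditRequest("Eiffel Tower", "located_in", "Paris"),
        EditRequest("Eiffel Tower", "height_m", "330"),
        EditRequest("Mount Everest", "located_in", "Nepal"),
    ]
    editors = [Editor("editor_0"), Editor("editor_1")]
    # Assumed grouping rule: edits touching the same subject go to the
    # same editor, so potentially conflicting edits are handled together.
    route_edits(edits, editors, key_fn=lambda e: hash(e.subject))
    for ed in editors:
        print(ed.name, [f"{e.subject}:{e.relation}->{e.new_object}"
                        for e in ed.assigned])
```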
