Abstract
Recently, pretrained language models such as BERT and XLNet have rapidly advanced the state of the art on many NLP tasks. They can model implicit semantic information between words in a text, but only at the token level, without considering background knowledge. Intuitively, background knowledge influences the efficacy of text understanding. Inspired by this, we focus on improving model pretraining by leveraging external knowledge. Unlike recent research that optimizes pretrained models with knowledge masking strategies, we propose a simple but general method to transfer explicit knowledge during pretraining. Specifically, we first match knowledge facts from a knowledge base (KB) and then add a knowledge injection layer to a transformer directly, without changing its architecture. This study seeks to determine the direct impact of explicit knowledge on model pretraining. We conduct experiments on 7 datasets using 5 knowledge bases across different downstream tasks. Our investigation reveals promising results on all tasks. The experiments also verify that domain-specific knowledge is superior to open-domain knowledge on domain-specific tasks, and that different knowledge bases perform differently on different tasks.
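The abstract describes adding a knowledge injection layer on top of a transformer without modifying its architecture. The following is a minimal sketch of what such a layer could look like; the module name `KnowledgeInjectionLayer`, the attention-based fusion, and the tensor shapes are illustrative assumptions, not the paper's released implementation.

```python
# A minimal sketch (assumed design, not the authors' code) of a knowledge
# injection layer stacked on top of a pretrained transformer's hidden states.
import torch
import torch.nn as nn

class KnowledgeInjectionLayer(nn.Module):
    def __init__(self, hidden_size: int, fact_size: int):
        super().__init__()
        # Project matched knowledge-fact embeddings into the model's hidden space.
        self.fact_proj = nn.Linear(fact_size, hidden_size)
        # Fuse token states with projected facts via attention over the facts.
        self.fusion = nn.MultiheadAttention(hidden_size, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, token_states: torch.Tensor, fact_embeddings: torch.Tensor) -> torch.Tensor:
        # token_states:    (batch, seq_len, hidden_size) from the transformer
        # fact_embeddings: (batch, num_facts, fact_size) from the matched KB facts
        facts = self.fact_proj(fact_embeddings)
        injected, _ = self.fusion(query=token_states, key=facts, value=facts)
        # Residual connection keeps the original token-level information intact.
        return self.norm(token_states + injected)

# Example usage (shapes only; the pretrained encoder itself is untouched):
# hidden = transformer(input_ids)                       # (B, T, 768)
# layer = KnowledgeInjectionLayer(hidden_size=768, fact_size=100)
# fused = layer(hidden, fact_embeddings)                # (B, T, 768)
```

Because the layer only consumes the transformer's output states, the underlying architecture and its pretrained weights are left unchanged, which is the property the abstract emphasizes.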
Highlights
Substantial work has shown that pretrained models [1,2,3,4] can learn language representations over large-scale text corpora, which are beneficial for many downstream NLP tasks.
(3) Our experiments show that, on domain-specific tasks, domain knowledge is superior to open-domain knowledge, that different knowledge bases perform differently on the same downstream tasks, and that the quantity of matched knowledge facts affects the results. These findings indicate that injecting relevant explicit knowledge is useful, but the relationship between explicit knowledge and downstream tasks remains unknown and is worth further study.
We propose a simple but general knowledge-transfer method for language model pretraining.
Summary
Substantial work has shown that pretrained models [1,2,3,4] can learn language representations over large-scale text corpora, which are beneficial for many downstream NLP tasks. However, these models operate solely at the token level and do not consider background knowledge. Given the sentence "Xiaomi was officially listed on the main board of HKEx," the background knowledge may include that Xiaomi is a science and technology company, that HKEx refers to Hong Kong Exchanges and Clearing Limited, and that the main board is an economic term. Knowing these facts helps us better understand the word senses and the sentence topic. Our contributions in this paper are threefold: (1) we propose a simple but general knowledge-transfer method for language model pretraining, (2) we implement the proposed method on XLNet as K-XLNet, and (3) we empirically verify the effectiveness of K-XLNet on various downstream tasks.
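As a toy illustration of the fact-matching step for the example sentence above, the snippet below looks up entity mentions in a hypothetical mini knowledge base; the dictionary `KB` and the helper `match_facts` are assumptions made for illustration, not the paper's matching procedure.

```python
# Toy KB fact matching by surface-form lookup (an illustrative assumption,
# not the paper's actual matching algorithm).
from typing import Dict, List

# Hypothetical mini knowledge base: entity mention -> list of facts.
KB: Dict[str, List[str]] = {
    "Xiaomi": ["Xiaomi is a science and technology company"],
    "HKEx": ["HKEx refers to Hong Kong Exchanges and Clearing Limited"],
    "main board": ["main board is an economic term"],
}

def match_facts(sentence: str, kb: Dict[str, List[str]]) -> List[str]:
    """Return every KB fact whose entity mention appears in the sentence."""
    matched = []
    for mention, facts in kb.items():
        if mention.lower() in sentence.lower():
            matched.extend(facts)
    return matched

sentence = "Xiaomi was officially listed on the main board of HKEx"
print(match_facts(sentence, KB))
# ['Xiaomi is a science and technology company',
#  'HKEx refers to Hong Kong Exchanges and Clearing Limited',
#  'main board is an economic term']
```

The matched facts would then be embedded and passed to the knowledge injection layer alongside the token representations.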