Abstract
In this work, we aim to equip pre-trained language models with structured knowledge. We present two self-supervised tasks that learn over raw text under the guidance of knowledge graphs. Building upon entity-level masked language models, our first contribution is an entity masking scheme that exploits relational knowledge underlying the text. This is achieved by using a linked knowledge graph to select informative entities and then masking their mentions. In addition, we use knowledge graphs to obtain distractors for the masked entities, and propose a novel distractor-suppressed ranking objective that is optimized jointly with the masked language modeling objective. In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training, to inject language models with structured knowledge via learning from raw text. It is more efficient than retrieval-based methods that perform entity linking and integration during finetuning and inference, and generalizes more effectively than methods that directly learn from concatenated graph triples. Experiments show that our proposed model achieves improved performance on five benchmarks, including question answering and knowledge base completion.
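As a rough illustration of the joint objective described above, the sketch below combines a standard masked-LM cross-entropy with a margin-based ranking term that prefers the true entity of a masked mention over KG-derived distractors. The cosine scoring, hinge margin, and all tensor names are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch (PyTorch) of a joint objective: masked-LM cross-entropy plus
# a distractor-suppressed ranking term that pushes the representation of a
# masked mention closer to its true entity than to KG-derived distractors.
# All names and the margin formulation are illustrative assumptions.
import torch.nn.functional as F


def joint_loss(mlm_logits, mlm_labels, mention_repr, true_entity_emb,
               distractor_embs, margin=1.0, ranking_weight=1.0):
    """mlm_logits: (B, T, V); mlm_labels: (B, T), -100 at unmasked positions.
    mention_repr: (B, D) pooled representation of each masked entity mention.
    true_entity_emb: (B, D); distractor_embs: (B, K, D), sampled from the KG."""
    # Masked language modeling loss over masked positions only.
    mlm_loss = F.cross_entropy(mlm_logits.view(-1, mlm_logits.size(-1)),
                               mlm_labels.view(-1), ignore_index=-100)

    # Score the mention against the true entity and each distractor.
    pos = F.cosine_similarity(mention_repr, true_entity_emb, dim=-1)               # (B,)
    neg = F.cosine_similarity(mention_repr.unsqueeze(1), distractor_embs, dim=-1)  # (B, K)

    # Distractor-suppressed ranking: the true entity should outscore every
    # distractor by at least the margin.
    ranking_loss = F.relu(margin - pos.unsqueeze(1) + neg).mean()

    return mlm_loss + ranking_weight * ranking_loss
```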
Highlights
Self-supervised pre-trained language models (LMs) like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) learn powerful contextualized representations
When downstream task performance depends on relational knowledge – the kind modeled by knowledge graphs (KGs) – directly finetuning a pre-trained LM often yields sub-optimal results, even though some works (Petroni et al., 2019; Davison et al., 2019) show that pre-trained LMs are partially equipped with such knowledge
Inspired by distantly supervised relation extraction (Mintz et al., 2009), which assumes that any sentence containing two entities can express the relation between those two entities in a KG, we argue that it is possible for a masked language model (MLM) to learn structured knowledge from raw text if appropriately guided by a KG (a minimal sketch of such KG-guided selection follows this list)
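To make the KG-guided selection concrete, the sketch below prefers masking a mention when the linked KG relates its entity to another entity in the same sentence, in line with the distant-supervision assumption above. The data structures and helper names are hypothetical rather than the authors' implementation.

```python
# A minimal sketch of KG-guided selection of entity mentions to mask: a mention
# is preferred when the linked KG connects its entity to another entity in the
# same sentence, so predicting it back requires relational knowledge.
# Data structures and helper names are hypothetical.
import random


def select_entities_to_mask(sentence_entities, kg_triples, mask_budget=1):
    """sentence_entities: list of (mention_span, entity_id) linked in one sentence.
    kg_triples: collection of (head_id, relation, tail_id) triples from the KG."""
    entity_ids = {eid for _, eid in sentence_entities}
    informative = []
    for span, eid in sentence_entities:
        # Keep a mention if the KG relates its entity to another entity
        # mentioned in the same sentence (in either direction).
        related = any((h == eid and t in entity_ids and t != eid) or
                      (t == eid and h in entity_ids and h != eid)
                      for h, _, t in kg_triples)
        if related:
            informative.append((span, eid))
    # Fall back to ordinary entity masking when no relational pair is found.
    pool = informative if informative else list(sentence_entities)
    return random.sample(pool, min(mask_budget, len(pool)))
```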
Summary
Self-supervised pre-trained language models (LMs) like ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) learn powerful contextualized representations. Open questions remain about what these models have learned, and improvements can be made along several directions. One such direction is the following: when downstream task performance depends on relational knowledge – the kind modeled by knowledge graphs (KGs) – directly finetuning a pre-trained LM often yields sub-optimal results, even though some works (Petroni et al., 2019; Davison et al., 2019) show that pre-trained LMs are partially equipped with such knowledge. The first line of methods retrieves a KG subgraph (Liu et al., 2019a; Lin et al., 2019; Lv et al., 2019) and/or pre-trained graph embeddings (Zhang et al., 2019b; Peters et al., 2019) via entity linking during both training and inference on downstream tasks. While these methods inject domain-specific knowledge directly into language representations, they rely heavily on the performance of the linking algorithm and/or the quality of graph embeddings.