Abstract

This paper studies how to automatically generate a natural language text that describes the facts in a knowledge graph (KG). Considering the few-shot setting, we leverage the excellent capabilities of pretrained language models (PLMs) in language understanding and generation. We make three major technical contributions, namely representation alignment for bridging the semantic gap between KG encodings and PLMs, relation-biased KG linearization for deriving better input representations, and multi-task learning for learning the correspondence between KG and text. Extensive experiments on three benchmark datasets have demonstrated the effectiveness of our model on the KG-to-text generation task. In particular, our model outperforms all comparison methods in both the fully-supervised and few-shot settings. Our code and datasets are available at this https URL.
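To make the linearization step concrete, here is a minimal sketch of how KG triples might be flattened into a token sequence for a PLM. The `linearize_triples` helper, the `[H]`/`[R]`/`[T]` role markers, and the example triples are illustrative assumptions, not the paper's actual relation-biased linearization.

```python
# Illustrative sketch only: one plausible way to linearize KG triples into a
# single token sequence for a PLM encoder. The [H]/[R]/[T] markers and the
# helper name are hypothetical; the paper's relation-biased linearization
# may differ.

def linearize_triples(triples):
    """Flatten (head, relation, tail) triples into one input string."""
    parts = []
    for head, relation, tail in triples:
        # Mark each element's role so the PLM can distinguish heads,
        # relations, and tails after flattening.
        parts.append(f"[H] {head} [R] {relation} [T] {tail}")
    return " ".join(parts)

triples = [
    ("Iron Man", "creator", "Stan Lee"),                  # assumed example facts
    ("Iron Man", "instance of", "fictional superhero"),
]
print(linearize_triples(triples))
# -> [H] Iron Man [R] creator [T] Stan Lee [H] Iron Man [R] instance of [T] fictional superhero
```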

Highlights

  • We make three major technical contributions, namely representation alignment for bridging the semantic gap between knowledge graph (KG) encodings and pretrained language models (PLMs), relation-biased KG linearization for deriving better input representations, and multi-task learning for learning the correspondence between KG and text

  • Extensive experiments on three benchmark datasets demonstrate the effectiveness of our few-shot KG-to-text generation model

Summary

Introduction

To understand the structured information in a KG, the task of KG-to-text generation has been proposed, which aims to automatically generate a descriptive text for a given knowledge graph (Koncel-Kedziorski et al., 2019; Ribeiro et al., 2020a). For example, such a description might read: "Iron Man is a fictional superhero who wears a suit of armor. He was created by writer Stan Lee and designed by artist Jack Kirby." With the help of crowdsourcing platforms and information extraction (IE) systems, large-scale labelled pairs of KGs and their descriptive texts have been created, such as WikiBio (Lebret et al., 2016) and the WebNLG Challenge. Based on these datasets, data-driven models have shown impressive capabilities to produce informative and fluent text for a given KG (Logan et al., 2019; Moryossef et al., 2019). We propose to study the task of few-shot KG-to-text generation, which aims to produce satisfactory output text given only a handful of (several hundred) labelled instances. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our model on this task.
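For a concrete, purely illustrative picture of a KG-to-text training instance, the snippet below pairs a few assumed triples about Iron Man with the descriptive sentence quoted above; in the few-shot setting, a model would see only a few hundred such pairs.

```python
# Illustrative KG-to-text training pair; the triples are assumed for
# demonstration and are not taken from any specific benchmark dataset.
example = {
    "kg_triples": [
        ("Iron Man", "instance of", "fictional superhero"),
        ("Iron Man", "creator", "Stan Lee"),
        ("Iron Man", "designer", "Jack Kirby"),
    ],
    "target_text": (
        "Iron Man is a fictional superhero who wears a suit of armor. "
        "He was created by writer Stan Lee and designed by artist Jack Kirby."
    ),
}
# A few-shot model is trained to generate target_text from kg_triples
# using only a few hundred such pairs.
```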
