Abstract

We demonstrate the viability of knowledge transfer between two related tasks: machine reading comprehension (MRC) and query-based text summarization. Using an MRC model trained on the SQuAD1.1 dataset as a core system component, we first build an extractive query-based summarizer. For better precision, this summarizer also compresses the output of the MRC model using a novel sentence compression technique. We further leverage pre-trained machine translation systems to abstract our extracted summaries. Our models achieve state-of-the-art results on the publicly available CNN/Daily Mail and Debatepedia datasets, and can serve as simple yet powerful baselines for future systems. We also hope that these results will encourage research on transfer learning from large MRC corpora to query-based summarization.
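The abstract above outlines a three-stage pipeline: an MRC model extracts the answer sentence for the query, a rule-based module compresses it, and pre-trained MT systems abstract the result. Below is a minimal sketch of the first, extractive stage, assuming an off-the-shelf SQuAD-trained question-answering model from the Hugging Face hub; the checkpoint, the `answer_sentence` helper, and the period-based sentence splitting are illustrative assumptions, not the authors' implementation.

```python
from transformers import pipeline

# A SQuAD-trained extractive QA model; this particular checkpoint is an
# assumption for the sketch, not necessarily the model used in the paper.
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

def answer_sentence(query: str, document: str) -> str:
    """Return the document sentence containing the MRC model's answer span."""
    result = qa(question=query, context=document)
    # `result` holds "answer", "score", and character offsets "start"/"end".
    # Naive period-based sentence boundaries; a real system would use a
    # proper sentence splitter.
    start = document.rfind(".", 0, result["start"]) + 1
    end = document.find(".", result["end"])
    end = len(document) if end == -1 else end + 1
    return document[start:end].strip()

doc = ("The summarizer builds on a reading comprehension model. "
       "It was trained on a large question answering corpus.")
print(answer_sentence("What was the model trained on?", doc))
# -> "It was trained on a large question answering corpus."
```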

Highlights

  • Query-based single-document text summarization is the process of selecting the most relevant points in a document for a given query and arranging them into a concise and coherent snippet of text

  • We show how processing the output of a machine reading comprehension (MRC) system (trained on the SQuAD1.1 dataset (Rajpurkar et al., 2016)) with a simple rule-based sentence compression module that operates on the dependency parse of the answer sentence yields better results than query-based extractive summarizers trained specifically for the target dataset (a compression sketch appears after this list)

  • We describe an approach to extractive and abstractive summarization that relies on components designed for different tasks: MRC, sentence compression, and machine translation (MT)
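The compression step in the second highlight can be made concrete with a short sketch. The pruning rules below (dropping appositive, adverbial-clause, and parataxis subtrees that do not contain the answer span) and the use of spaCy are assumptions for illustration; the paper's actual rule set over the dependency parse is not reproduced here.

```python
import re
import spacy

nlp = spacy.load("en_core_web_sm")

# Dependency labels whose subtrees get pruned; an assumed rule set,
# not the authors' actual rules.
PRUNE_DEPS = {"appos", "advcl", "parataxis"}

def compress(sentence: str, answer: str) -> str:
    """Drop prunable subtrees of the dependency parse that miss the answer."""
    doc = nlp(sentence)
    drop = set()
    for token in doc:
        if token.dep_ in PRUNE_DEPS:
            subtree = list(token.subtree)  # tokens in document order
            span = doc[subtree[0].i : subtree[-1].i + 1]
            if answer not in span.text:  # keep subtrees holding the answer
                drop.update(t.i for t in subtree)
    text = "".join(t.text_with_ws for t in doc if t.i not in drop)
    # Naive detokenization: remove stray spaces left before punctuation.
    return re.sub(r"\s+([,.;!?])", r"\1", text).strip()

print(compress("The model extracts the answer sentence while ignoring "
               "other candidate spans.", "the answer sentence"))
# -> "The model extracts the answer sentence."
```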


Summary

Introduction

Query-based single-document text summarization is the process of selecting the most relevant points in a document for a given query and arranging them into a concise and coherent snippet of text. The query can range from a single word to a fully formed natural language question. Extractive summarizers select the most relevant span of text verbatim from the source, while abstractive summarizers further paraphrase the selected content for clarity and brevity. By and large, existing approaches train models on summarization corpora (Nema et al., 2017; Hasselqvist et al., 2017), which are of moderate size. Large corpora are, however, available for two related tasks: machine reading comprehension (MRC) and machine translation (MT). To find out whether such corpora have utility for summarizers, we propose methods to di-
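The abstractive stage is described only as leveraging pre-trained MT systems. One common way to paraphrase with off-the-shelf MT, shown here strictly as an assumed mechanism rather than the authors' exact setup, is round-trip translation through a pivot language using publicly available MarianMT checkpoints.

```python
from transformers import pipeline

# Off-the-shelf English<->German MarianMT models used as a paraphraser.
# Round-tripping through a pivot language is an assumption for this sketch.
en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def abstract(summary: str) -> str:
    """Paraphrase an extracted summary via English -> German -> English."""
    german = en_de(summary)[0]["translation_text"]
    return de_en(german)[0]["translation_text"]

print(abstract("The model extracts the answer sentence."))
```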


