Abstract

We consider extractive summarization within a cluster of related texts (multi-document summarization). Unlike in single-document summarization, redundancy is a central concern here because sentences across related documents often convey overlapping information. Sentence extraction in this setting is therefore difficult: one must determine which pieces of information are relevant while avoiding unnecessary repetition. To address this problem, we propose PoBRL (Policy Blending with maximal marginal relevance and Reinforcement Learning), a novel reinforcement-learning-based method for multi-document summarization. PoBRL jointly optimizes the objectives necessary for a high-quality summary: importance, relevance, and length. Our strategy decouples this multi-objective optimization into sub-problems that can be solved individually by reinforcement learning. PoBRL then blends the learned policies to produce a summary that is a concise and complete representation of the original input. Our empirical analysis shows strong performance on several multi-document datasets, and human evaluation confirms that our method produces high-quality output.
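
To make the blending step concrete, below is a minimal sketch of maximal-marginal-relevance (MMR) selection driven by per-sentence scores from a learned policy. The function mmr_blend, the importance_scores array (standing in for a policy's output), and the similarity matrix sim are illustrative assumptions under a greedy MMR formulation, not the paper's actual implementation.

```python
def mmr_blend(sentences, importance_scores, sim, lam=0.7, max_len=100):
    """Greedily select sentences by MMR under a length budget.

    sentences:         list of (sentence_text, length) pairs
    importance_scores: per-sentence scores, e.g. from a learned policy
    sim:               pairwise similarity matrix between sentences
    lam:               trade-off between importance and redundancy
    max_len:           summary length budget (same units as `length`)
    """
    selected, total_len = [], 0
    candidates = list(range(len(sentences)))
    while candidates and total_len < max_len:
        def mmr(i):
            # Redundancy = similarity to the most similar sentence
            # already in the summary (0 when nothing is selected yet).
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * importance_scores[i] - (1 - lam) * redundancy

        best = max(candidates, key=mmr)
        text, length = sentences[best]
        if total_len + length > max_len:
            break  # the best remaining sentence would exceed the budget
        selected.append(best)
        total_len += length
        candidates.remove(best)
    return [sentences[i][0] for i in selected]


# Hypothetical usage: two near-duplicate sentences and one novel one.
sents = [("A quake struck the city.", 5),
         ("The earthquake hit downtown.", 4),
         ("Relief efforts began.", 3)]
scores = [0.9, 0.8, 0.6]
sim = [[1.0, 0.9, 0.1],
       [0.9, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
# Picks sentence 0, then skips its near-duplicate in favor of sentence 2.
print(mmr_blend(sents, scores, sim, lam=0.7, max_len=9))
```

In this toy run the redundancy penalty steers selection away from the second sentence even though its importance score is higher than the third's, which is the intuition behind blending importance with marginal relevance.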
