Abstract

Transformer-based summarization models rely solely on the attention mechanism for document encoding, and attention redundancy makes it difficult for them to accurately capture long-range dependencies in long documents. To address this issue, we propose a topic-model-guided extractive summarization framework (TopicSum) that uses a heterogeneous graph neural network to leverage topic information as document-level features during sentence selection, thereby capturing long-range dependencies among sentences. The topic model operates at the sentence level, matching the basic unit of the extractive summarization task. In addition, a memory mechanism dynamically stores and updates topic information in a memory module, reducing the chance that repetitive information guides sentence selection. We evaluate the model on three long-document datasets, PubMed, arXiv, and GovReport, and achieve significantly higher ROUGE scores than previous extractive and abstractive models. Our experiments further demonstrate that widely regarded large language models such as ChatGPT are insufficient for handling the long-document summarization task directly. The proposed approach is competitive in terms of both generation quality and deployment requirements.
