Abstract

Multi-document summarization is one of the most important tasks in Natural Language Processing (NLP) and has gained increasing attention in recent years. It aims to generate a single summary from several topic-related documents. Compared with extractive summarization, abstractive summarization produces summaries that are closer to human-written ones. Proposing effective and efficient abstractive multi-document summarization models is therefore of great significance to the NLP community. Existing deep learning based multi-document summarization models rely on the exceptional ability of neural networks to extract distinctive features. However, they overlook important linguistic knowledge, such as dependencies between words, even though the linguistic information in texts carries meaningful knowledge about the input documents. In addition, how to automatically evaluate summary quality is crucial to designing a high-performance summarization model, since the evaluation metric objectively measures the effectiveness of a method. In this proposal, we put forward two research questions and corresponding solutions for the abstractive multi-document summarization task.
