Abstract

Summarization is the task of creating a summary that captures the major points of the original document. Deep learning plays an important role in both abstractive and extractive summary generation. While a number of models show that combining the two yields good results, this paper focuses on a purely abstractive method for generating good summaries. Our model is a stacked RNN network with a monotonic alignment mechanism. Monotonic alignment is advantageous because it produces context in the same order as the original document, while at the same time eliminating repeated sequences. To obtain monotonic alignment, this paper proposes two energies that are calculated using only the previous alignment state. We use a sub-word method to reduce the rate of out-of-vocabulary (OOV) tokens. Dropout is used for generalization, and residual connections are used to overcome gradient vanishing. We experiment on the CNN/Daily Mail and Reddit datasets. Our method outperforms previous models with monotonic alignment by 4 ROUGE-1 points and achieves results comparable to the state of the art.
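To illustrate the general idea of monotonic alignment described above, here is a minimal sketch of an attention step that can only move forward through the encoder states. The energy form (a dot product with the previously attended state), the function name, and all variable names are illustrative assumptions, not the paper's actual two-energy formulation:

```python
import numpy as np

def monotonic_context(enc_states, prev_align):
    """Toy monotonic-attention step.

    enc_states: (T, d) array of encoder hidden states.
    prev_align: (T,) previous alignment distribution.
    Returns a context vector and the new alignment, restricted to
    positions at or after the previously attended position, so the
    alignment over the source can only move forward (monotonic).
    """
    prev_pos = int(np.argmax(prev_align))          # last attended position
    # Illustrative energy: dot product with the previously attended state.
    energies = enc_states @ enc_states[prev_pos]
    # Mask positions before prev_pos so attention cannot jump backward.
    energies[:prev_pos] = -np.inf
    # Softmax over the remaining (forward) positions.
    align = np.exp(energies - energies.max())
    align /= align.sum()
    context = align @ enc_states                   # weighted sum of states
    return context, align
```

In this sketch the forward-only mask is what prevents the decoder from re-attending to earlier source positions, which is the property the abstract credits with eliminating repeated sequences in the output.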
