Abstract

The sequence-to-sequence architecture with an attention mechanism is widely used in abstractive text summarization and has achieved remarkable results. However, this approach may suffer from error accumulation: at test time, the decoder's input at each step is the word it generated at the previous step, so decoder-side errors are continuously amplified. This paper proposes a Summarization model with a Bidirectional decoder (BiSum), in which the backward decoder provides a reference for the forward decoder. We apply the attention mechanism over both the encoder and the backward decoder so that the summary generated by the backward decoder can be properly understood. In addition, a pointer mechanism is added to both the backward and forward decoders to address the out-of-vocabulary problem. We also remove the word segmentation step from standard Chinese preprocessing, which greatly improves summary quality. Experimental results show that our model produces higher-quality summaries on the Chinese TTNews dataset and the English CNN/Daily Mail dataset.
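The following is a minimal sketch of the bidirectional-decoder idea described above, assuming GRU encoders/decoders and simple dot-product attention; the pointer/copy mechanism is omitted, and all module names and sizes are illustrative assumptions rather than the authors' implementation. The forward decoder attends over both the encoder states and the backward decoder's states, so it can consult a right-to-left draft of the summary while generating left-to-right.

```python
# Illustrative sketch only: a backward decoder produces a right-to-left draft,
# and the forward decoder attends over both the encoder and that draft.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiSumSketch(nn.Module):
    def __init__(self, vocab_size=10000, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True, bidirectional=True)
        # Backward decoder generates the summary right-to-left; the forward
        # decoder generates left-to-right with the backward states as extra memory.
        self.bwd_decoder = nn.GRU(emb, 2 * hid, batch_first=True)
        self.fwd_decoder = nn.GRU(emb, 2 * hid, batch_first=True)
        self.out = nn.Linear(2 * hid, vocab_size)

    @staticmethod
    def attend(query, memory):
        # Dot-product attention: query (B, T, H), memory (B, S, H) -> context (B, T, H)
        scores = torch.bmm(query, memory.transpose(1, 2))
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights, memory)

    def forward(self, src, bwd_tgt, fwd_tgt):
        enc_out, _ = self.encoder(self.embed(src))          # (B, S, 2*hid)

        # Backward pass: decode the reversed summary, attending over the encoder.
        bwd_out, _ = self.bwd_decoder(self.embed(bwd_tgt))
        bwd_ctx = self.attend(bwd_out, enc_out)
        bwd_logits = self.out(bwd_out + bwd_ctx)

        # Forward pass: attend over the encoder AND the backward decoder states,
        # so the forward decoder has a reference for "future" summary content.
        fwd_out, _ = self.fwd_decoder(self.embed(fwd_tgt))
        fwd_ctx = self.attend(fwd_out, enc_out) + self.attend(fwd_out, bwd_out)
        fwd_logits = self.out(fwd_out + fwd_ctx)
        return bwd_logits, fwd_logits


# Toy usage with random token ids (batch of 2, source length 12, summary length 6).
model = BiSumSketch()
src = torch.randint(0, 10000, (2, 12))
summary = torch.randint(0, 10000, (2, 6))
bwd_logits, fwd_logits = model(src, summary.flip(dims=[1]), summary)
print(bwd_logits.shape, fwd_logits.shape)  # both (2, 6, 10000)
```

In the full model, both decoders would additionally carry a pointer mechanism for copying out-of-vocabulary words from the source, and training would supervise the backward and forward passes jointly.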
