Abstract

Abstractive text summarization aims to generate a brief version of a given sentence that expresses its main meaning. Although models based on the sequence-to-sequence (Seq2Seq) framework have recently achieved remarkable results, several problems remain. In this paper, we propose a selective reinforced Seq2Seq attention model for abstractive social media text summarization. We add a selective gate after the encoder module to better filter out irrelevant information. In addition, we combine the cross-entropy loss with a reinforcement learning policy to optimize the ROUGE score directly. Evaluations on a well-known social media dataset (LCSTS) demonstrate that our model outperforms most of the well-known baseline models, and the proposed model is 2.6%, 2.1%, and 2.5% higher than the basic Seq2Seq attention model on the F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L, respectively.
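As a rough illustration of the two mechanisms the abstract names, the PyTorch sketch below shows (1) a selective gate applied to encoder hidden states and (2) a mixed cross-entropy/policy-gradient objective over a ROUGE reward. All class, function, and variable names here are hypothetical, and the self-critical (greedy-decoding) baseline is only one common choice for the reward baseline; this is a minimal sketch under those assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class SelectiveGate(nn.Module):
    # Element-wise gate over encoder hidden states, conditioned on each
    # state and on a whole-sentence representation, so the decoder attends
    # over a filtered version of the source.
    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear_h = nn.Linear(hidden_size, hidden_size, bias=False)
        self.linear_s = nn.Linear(hidden_size, hidden_size, bias=True)

    def forward(self, enc_hidden: torch.Tensor, sent_repr: torch.Tensor):
        # enc_hidden: (batch, src_len, hidden); sent_repr: (batch, hidden)
        gate = torch.sigmoid(
            self.linear_h(enc_hidden) + self.linear_s(sent_repr).unsqueeze(1)
        )
        return enc_hidden * gate  # gated (filtered) encoder states


def mixed_loss(log_probs_sampled: torch.Tensor,
               reward_sampled: torch.Tensor,
               reward_greedy: torch.Tensor,
               ce_loss: torch.Tensor,
               gamma: float = 0.5) -> torch.Tensor:
    # log_probs_sampled: (batch,) summed token log-probs of a sampled summary
    # reward_sampled / reward_greedy: (batch,) ROUGE of sampled / greedy output
    # ce_loss: scalar teacher-forcing cross-entropy loss
    # gamma: interpolation weight between the RL and cross-entropy terms
    advantage = reward_sampled - reward_greedy      # baseline-subtracted reward
    rl_loss = -(advantage.detach() * log_probs_sampled).mean()
    return gamma * rl_loss + (1.0 - gamma) * ce_loss

Here the gate suppresses source positions judged irrelevant before attention is computed, and the mixed objective lets the non-differentiable ROUGE metric shape training while the cross-entropy term keeps the output fluent.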
