Abstract
Sequence-to-sequence (Seq2Seq) models for abstractive summarization have attracted wide attention due to their powerful ability to represent sequences. However, sequence-structured data is a simple format that cannot capture the complexity of graphs; this may introduce ambiguity and hurt summarization performance. In this paper, we propose Gated Graph Neural Attention Networks (GGNANs) for abstractive summarization. The proposed GGNANs unify graph neural networks and the celebrated Seq2Seq framework to better encode the full graph-structured information. We further propose a graph transformation method based on PMI, self-connections, forward-connections, and backward-connections to better combine graph-structured and sequence-structured information. Extensive experimental results on the LCSTS and Gigaword datasets show that our proposed model outperforms most strong baseline models.
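To make the graph transformation concrete, the sketch below shows one plausible way to build such a graph over a token sequence: sequence structure is kept via self-, forward-, and backward-connections between adjacent positions, while PMI computed over sliding co-occurrence windows adds long-range word-association edges. This is a minimal illustrative sketch; the function name `build_graph`, the windowed PMI estimator, the edge-type labels, and the threshold are assumptions, not the paper's exact construction.

```python
import math
from collections import Counter
from itertools import combinations

def build_graph(tokens, window=5, pmi_threshold=0.0):
    """Hypothetical sketch of a PMI + sequence-edge graph transform.

    Nodes are token positions. Edges combine:
      - self-connections      (i -> i)
      - forward-connections   (i -> i+1)
      - backward-connections  (i+1 -> i)
      - PMI edges between words that co-occur in sliding windows
    Edge types are kept as labels so a gated GNN could learn a
    separate transform per relation.
    """
    n = len(tokens)

    # Estimate word and pair probabilities from sliding windows.
    word_cnt, pair_cnt, n_win = Counter(), Counter(), 0
    for s in range(max(1, n - window + 1)):
        win = set(tokens[s:s + window])
        n_win += 1
        word_cnt.update(win)
        pair_cnt.update(frozenset(p) for p in combinations(win, 2))

    def pmi(a, b):
        # PMI(a, b) = log( p(a, b) / (p(a) * p(b)) )
        joint = pair_cnt[frozenset((a, b))]
        if joint == 0:
            return float("-inf")
        return math.log(joint * n_win / (word_cnt[a] * word_cnt[b]))

    edges = []
    for i in range(n):
        edges.append((i, i, "self"))
        if i + 1 < n:
            edges.append((i, i + 1, "forward"))
            edges.append((i + 1, i, "backward"))
        for j in range(i + 1, n):
            if tokens[i] != tokens[j] and pmi(tokens[i], tokens[j]) > pmi_threshold:
                edges.append((i, j, "pmi"))
                edges.append((j, i, "pmi"))
    return edges

# Usage: edges = build_graph("the cat sat on the mat".split())
```

Typing the edges this way means the sequential reading order is never lost when the sentence is lifted into a graph, while the PMI links give the encoder access to co-occurrence structure that a plain Seq2Seq encoder cannot see.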