Abstract

Headline generation aims to condense the key information of an article or document into a concise one-sentence summary. The Transformer architecture is generally effective for such tasks, yet its training time and GPU resource consumption grow dramatically as the input text length increases. To address this problem, a hybrid attention mechanism is proposed that models both local and global semantic information among words, significantly improving training efficiency, especially on long text, without sacrificing effectiveness; the fluency and semantic coherence of the generated headlines are in fact enhanced. Experimental results on an open benchmark dataset show that, compared with the baseline model’s best performance, the proposed model achieves increases of 14.7%, 16.7%, 14.4%, and 9.1% in the F1 values of ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-WE, respectively. The semantic coherence of the generated text is also improved, as shown by a 2.8% gain in the BERTScore F1 value. These results indicate that the hybrid attention mechanism improves not only efficiency but also the effectiveness of headline generation, and it may serve as a reference for related text generation tasks.
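The abstract does not specify how the local and global attention branches are formulated or combined, so the following is only a minimal illustrative sketch in PyTorch of one common way such a hybrid could be realized: a windowed (local) branch and a full-sequence (global) branch whose outputs are mixed by a learnable gate. The window size, gating scheme, and all names are assumptions, and for clarity the local branch is shown by masking full attention scores rather than by the restricted computation a real efficiency-oriented implementation would use.

```python
# Illustrative sketch only: NOT the paper's actual mechanism.
import torch
import torch.nn.functional as F
from torch import nn


class HybridAttention(nn.Module):
    """Mixes windowed (local) and full (global) self-attention with a learnable gate."""

    def __init__(self, d_model: int, window: int = 8):
        super().__init__()
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        self.gate = nn.Parameter(torch.tensor(0.0))  # assumed scalar local/global mixing gate
        self.window = window
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale

        # Global branch: ordinary full self-attention over the whole sequence.
        global_ctx = torch.matmul(F.softmax(scores, dim=-1), v)

        # Local branch: mask out positions farther than `window` from each token.
        seq_len = x.size(1)
        idx = torch.arange(seq_len, device=x.device)
        local_mask = (idx[None, :] - idx[:, None]).abs() > self.window
        local_scores = scores.masked_fill(local_mask, float("-inf"))
        local_ctx = torch.matmul(F.softmax(local_scores, dim=-1), v)

        # Gated combination of the local and global context vectors.
        g = torch.sigmoid(self.gate)
        return self.out(g * local_ctx + (1 - g) * global_ctx)


# Usage on a toy batch of embeddings.
layer = HybridAttention(d_model=64, window=4)
h = layer(torch.randn(2, 128, 64))
print(h.shape)  # torch.Size([2, 128, 64])
```

Note that the efficiency gains reported in the abstract would come from restricting the local branch's computation to the window (and pruning or sparsifying the global branch), which this dense sketch does not attempt to reproduce.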
