Abstract

Most news headline generation models built on sequence-to-sequence or recurrent networks share two shortcomings: limited parallelism and a tendency to generate repeated words. Such models struggle to select the important words in a news article and reproduce those expressions, yielding headlines that summarize the news inaccurately. In this work, we propose TD-NHG, a model for news headline generation based on an improved Transformer decoder. TD-NHG uses masked multi-head self-attention to learn feature information from different representation subspaces of the news text, and applies a decoding selection strategy that combines top-k, top-p, and a repetition-penalty mechanism in the decoding stage. We conducted comparative experiments on the LCSTS and CSTS datasets: Rouge-1, Rouge-2, and Rouge-L scores on LCSTS/CSTS are 31.28/38.73, 12.68/24.97, and 28.31/37.47, respectively. The experimental results demonstrate that the proposed method improves the accuracy and diversity of generated news headlines.
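The decoding selection strategy named in the abstract combines three standard mechanisms: a repetition penalty that down-weights already-generated tokens, top-k filtering, and top-p (nucleus) filtering. The sketch below illustrates how such a strategy can be composed at each decoding step; the function name, NumPy implementation, and the parameter values (k, p, penalty strength) are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np

def filter_logits(logits, generated_ids, k=50, p=0.9, repetition_penalty=1.2):
    """Minimal sketch of a top-k / top-p / repetition-penalty decoding step.

    Assumed parameter values; the paper's actual settings are not given
    in the abstract.
    """
    logits = logits.copy()

    # Repetition penalty: down-weight tokens that were already generated.
    for token_id in set(generated_ids):
        if logits[token_id] > 0:
            logits[token_id] /= repetition_penalty
        else:
            logits[token_id] *= repetition_penalty

    # Top-k: keep only the k highest-scoring tokens.
    if k > 0:
        kth_best = np.sort(logits)[-k]
        logits[logits < kth_best] = -np.inf

    # Top-p (nucleus): keep the smallest set of tokens whose
    # cumulative probability reaches p.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # always keep >= 1 token
    logits[order[cutoff:]] = -np.inf
    return logits

# Usage: sample the next token from the filtered distribution.
rng = np.random.default_rng(0)
logits = rng.normal(size=1000)  # dummy vocabulary scores
filtered = filter_logits(logits, generated_ids=[3, 7])
probs = np.exp(filtered - filtered.max())  # exp(-inf) -> 0 masks removed tokens
probs /= probs.sum()
next_token = rng.choice(len(probs), p=probs)
```

Applying the penalty before the two truncation filters means a repeated token can drop out of the candidate set entirely, which is what suppresses the repeated-word generation the abstract identifies as a shortcoming of earlier models.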
