Abstract

When generating short text summaries, it is challenging to accurately learn the global semantic information of the original text and to extract the correlation features between local semantic information; this leads to excessive redundant information and makes the generated summaries ineffective. In addition, existing normalization algorithms increase the computational complexity of the text summarization model, which degrades its performance. To address these problems, a text summarization model, GMELC (Generation Model for Enhancing Local Correlation), is proposed to enhance local correlation in generated summaries. First, the residual concept used in feature extraction networks for other media is introduced into the text summarization model: the word semantic feature is added as a residual block to the n-gram feature, which improves the dependencies among words in phrases and strengthens the correlation between phrases and words in sentences. Second, a scaled l2 normalization method is proposed to normalize the data, reducing the number of training parameters and removing the unnecessary computation caused by variance; this lowers the computational complexity of the model and thereby improves its efficiency and performance. To verify the model's ability to enhance the correlation between Chinese characters and words, experiments were conducted on the Chinese dataset LCSTS. The results show that the summaries generated by GMELC achieve higher recall and better readability than other state-of-the-art models.
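The abstract does not give the exact formulation of the proposed scaled l2 normalization, but the stated idea (normalize by the l2 norm so that no mean or variance needs to be computed, unlike layer normalization) can be sketched as follows. The function name, the gain parameter `g`, and the `sqrt(d)` scale factor are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def scaled_l2_norm(x, g=1.0, eps=1e-6):
    """Hypothetical sketch of a scaled l2 normalization.

    Divides each feature vector by its l2 norm (avoiding the mean and
    variance terms of layer normalization), then rescales with a gain g.
    The sqrt(d) factor keeps the output magnitude comparable to a
    unit-variance normalization; it is an assumption, not from the paper.
    """
    x = np.asarray(x, dtype=np.float64)
    # l2 norm over the feature (last) dimension; eps guards against zero vectors
    l2 = np.sqrt(np.sum(x * x, axis=-1, keepdims=True)) + eps
    d = x.shape[-1]
    return g * x / l2 * np.sqrt(d)
```

Compared with layer normalization, this drops the per-vector mean subtraction and variance estimate, which is consistent with the abstract's claim of removing the computation caused by variance.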
