Abstract

Text summarization is a significant and challenging research problem in natural language processing. Abstractive text summarization mainly relies on the encoder-decoder framework, in which the encoder often lacks a sufficiently deep semantic understanding of the input text, and the training process suffers from exposure bias and semantic inconsistency between the reference and generated summaries. We propose an improved encoder-decoder model that incorporates a hierarchical attention mechanism and multiobjective reinforcement learning. The encoder introduces a multihead self-attention mechanism to capture more comprehensive semantic information from multiple angles and dimensions, while the decoder introduces a pointer-generator network to address the out-of-vocabulary problem. A multiobjective reinforcement learning method is applied during training to optimize the model with respect to reducing exposure bias, maintaining semantic consistency, and enhancing readability. Comparative experiments demonstrate that the proposed model achieves significant improvements on the ROUGE evaluation metrics and that the generated summaries are semantically similar to the reference summaries.
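As a rough illustration of the multihead self-attention component the abstract refers to, the sketch below implements standard scaled dot-product self-attention split across several heads in NumPy. This is a generic, minimal version of the mechanism, not the authors' implementation; all function and parameter names (`multi_head_self_attention`, `num_heads`, the projection matrices) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, Wq, Wk, Wv, Wo, num_heads):
    """Generic multihead scaled dot-product self-attention (illustrative sketch).

    x:               (seq_len, d_model) token representations
    Wq, Wk, Wv, Wo:  (d_model, d_model) projection matrices
    """
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Project inputs, then split the model dimension into heads: (heads, seq, d_head).
    q = (x @ Wq).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    k = (x @ Wk).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    v = (x @ Wv).reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)
    # Each head attends over the full sequence independently.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    attn = softmax(scores, axis=-1)
    # Concatenate head outputs back to (seq_len, d_model) and project.
    out = (attn @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo

rng = np.random.default_rng(0)
d_model, seq_len, heads = 8, 5, 2
x = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.normal(size=(d_model, d_model)) for _ in range(4))
y = multi_head_self_attention(x, Wq, Wk, Wv, Wo, num_heads=heads)
print(y.shape)  # (5, 8)
```

Each head attends to the sequence in its own projected subspace, which is what lets the encoder gather semantic information "from multiple angles and dimensions" as the abstract puts it.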
