Abstract

Automatically generating accurate summaries of legal public opinion news can help readers grasp the main ideas of the news quickly. Although many improved sequence-to-sequence models have been proposed for abstractive text summarization, these approaches face two challenges when applied to domain-specific summarization: (1) the appropriate selection of domain knowledge, and (2) an effective way of integrating that domain knowledge into the summarization model. To tackle these challenges, this paper selects pre-trained topic information as the legal domain knowledge and integrates it into a sequence-to-sequence model to improve the summarization of public opinion news. Concretely, two kinds of topic information are used: first, topic words denoting the main aspects of the source document are encoded to guide the decoding process; second, the predicted output is forced to have a topic probability distribution similar to that of the source document. We evaluate our model on a large dataset of legal public opinion news collected from microblogs, and the experimental results show that the proposed model outperforms existing baseline systems under the ROUGE metrics. To the best of our knowledge, this work represents the first attempt at text summarization in the legal public opinion domain.
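To make the second idea concrete, the sketch below shows one plausible way to penalize divergence between the topic distribution of the source document and that of the generated summary. This is an illustration only, not the authors' implementation: the function name, the use of a KL-divergence term, and the weighting factor lambda_topic are all assumptions about how such a topic-consistency constraint could be realized in PyTorch.

import torch

def topic_consistency_loss(source_topics: torch.Tensor,
                           summary_topics: torch.Tensor,
                           eps: float = 1e-8) -> torch.Tensor:
    """KL(source || summary) between topic probability distributions.

    Both tensors have shape (batch, num_topics), each row summing to 1,
    e.g. produced by a pre-trained topic model such as LDA (an assumed choice).
    """
    # Clamp to avoid log(0) for topics with zero mass.
    source_topics = source_topics.clamp_min(eps)
    summary_topics = summary_topics.clamp_min(eps)
    kl = (source_topics * (source_topics.log() - summary_topics.log())).sum(dim=-1)
    return kl.mean()

# Hypothetical training objective combining the usual sequence loss with the
# topic term (lambda_topic is an assumed hyperparameter):
#   loss = nll_loss + lambda_topic * topic_consistency_loss(src_topics, gen_topics)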
