Abstract

Extractive summarization generates a summary by ranking sentences from the original text according to their importance and salience. Text representation is a fundamental step that affects the effectiveness of many text summarization methods. Distributed word vector representations have been shown to improve Natural Language Processing (NLP) tasks, in particular Automatic Text Summarization (ATS). However, most of them do not consider the order and the context of words in a sentence, which prevents them from fully capturing sentence semantics and the syntactic relationships between sentence constituents. In this paper, to overcome this problem, we propose a deep neural network based method for extractive single-document summarization that uses state-of-the-art sentence embedding models. Experiments are performed on the standard DUC2002 dataset using three sentence embedding models. The obtained results show the effectiveness of the sentence embedding models for ATS. The overall comparison shows that our method outperforms eight well-known ATS baselines and achieves results comparable to state-of-the-art deep learning based methods.
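To make the extractive setting concrete, the sketch below ranks sentences by the cosine similarity of their embeddings to the document centroid and keeps the top-k. This is only an illustrative, unsupervised baseline, not the deep neural network model proposed in the paper; the sentence-transformers package and the model name are assumptions made for the example.

```python
# Illustrative sketch (not the paper's model): rank sentences by cosine
# similarity between each sentence embedding and the document centroid,
# then keep the top-k sentences as the extractive summary.
# Assumes the sentence-transformers package; the model name is an example.
import numpy as np
from sentence_transformers import SentenceTransformer

def extract_summary(sentences, k=3, model_name="all-MiniLM-L6-v2"):
    model = SentenceTransformer(model_name)
    emb = model.encode(sentences)                      # shape: (n_sentences, dim)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    centroid = emb.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    scores = emb @ centroid                            # cosine similarity to centroid
    top = sorted(np.argsort(scores)[::-1][:k])         # keep original sentence order
    return [sentences[i] for i in top]

sentences = [
    "Extractive summarization selects salient sentences from a document.",
    "Sentence embeddings capture word order and context.",
    "The weather was pleasant on the day of the experiment.",
]
print(extract_summary(sentences, k=2))
```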
