Abstract

Extractive summarization generates a summary by ranking sentences from the original text according to their importance and salience. Text representation is a fundamental process that affects the effectiveness of many text summarization methods. Distributed word vector representations have been shown to improve Natural Language Processing (NLP) tasks, including Automatic Text Summarization (ATS). However, most of them do not consider the order and the context of the words in a sentence, which prevents them from fully capturing sentence semantics and the syntactic relationships between sentence constituents. To overcome this problem, we propose in this paper a deep neural network model-based method for extractive single-document summarization that uses state-of-the-art sentence embedding models. Experiments are performed on the standard DUC2002 dataset with three sentence embedding models. The obtained results show the effectiveness of the chosen sentence embedding models for ATS. The overall comparison shows that our method outperforms eight well-known ATS baselines and achieves results comparable to state-of-the-art deep learning based methods.
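
The following is a minimal, illustrative sketch of embedding-based extractive sentence ranking, not the paper's deep neural network model: sentences are embedded, scored by cosine similarity to the document centroid, and the top-ranked sentences are kept. The sentence-transformers library and the "all-MiniLM-L6-v2" checkpoint are assumptions for illustration only and are not the embedding models evaluated in the paper.

```python
# Sketch: rank sentences with sentence embeddings and keep the top-k.
# Illustrative only; the paper's method uses a deep neural network and
# different sentence embedding models.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency


def extractive_summary(sentences, top_k=3):
    """Score sentences by cosine similarity to the document centroid."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint
    emb = model.encode(sentences)                    # (n_sentences, dim)
    centroid = emb.mean(axis=0)                      # crude document representation
    scores = emb @ centroid / (
        np.linalg.norm(emb, axis=1) * np.linalg.norm(centroid) + 1e-8
    )
    # Keep the top-k sentences, restored to their original document order.
    top = sorted(np.argsort(scores)[::-1][:top_k])
    return [sentences[i] for i in top]


if __name__ == "__main__":
    doc = [
        "Extractive summarization ranks sentences by importance.",
        "Sentence embeddings capture word order and context.",
        "The weather was pleasant that day.",
        "Top-ranked sentences are concatenated to form the summary.",
    ]
    print(extractive_summary(doc, top_k=2))
```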
