Abstract

Due to the exponential growth of textual information across fields of knowledge and on the internet, extracting important information or generating a summary from a multi-document collection in a specific field is very challenging. With such a gigantic amount of textual content, manual text summarization becomes impractical, since it is expensive and consumes a great deal of time and effort, so developing automatic text summarization (ATS) systems is becoming increasingly essential. ATS approaches are either extractive or abstractive; the extractive approach is simpler and faster than the abstractive one. This work proposes an extractive ATS system that aims to extract a small subset of sentences from a large multi-document text. First, the whole text is preprocessed by applying natural language processing techniques such as sentence segmentation, word tokenization, stop-word removal, and stemming to provide a structured representation of the original document collection. Based on this structured representation, the ATS problem is formulated as a multi-objective optimization (MOO) problem that optimizes the extracted summary to maintain coverage of the main text content while avoiding redundant information. Second, an evolutionary sparse multi-objective algorithm is developed to solve the formulated large-scale MOO problem. The output of this algorithm is a set of non-dominated summaries (a Pareto front), and a novel criterion is proposed to select the target summary from this front. The proposed ATS system has been evaluated on the Document Understanding Conference (DUC) datasets, and the output summaries have been assessed using ROUGE metrics and compared with results from the literature.
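To make the formulation concrete, the sketch below illustrates in Python how the two competing objectives mentioned in the abstract, content coverage and redundancy, could be computed for a candidate binary selection vector over a TF-IDF sentence representation. The use of scikit-learn's TfidfVectorizer, cosine similarity as the scoring function, and the function names are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Illustrative sketch only: the paper's exact objectives and sentence
# representation may differ. Assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_representation(sentences):
    """Vectorize preprocessed sentences into a TF-IDF matrix
    (the 'structured representation' of the document collection)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    return vectorizer.fit_transform(sentences)

def coverage(selection, tfidf):
    """Cosine similarity between the selected sentences' centroid and the
    whole collection's centroid. Higher is better: the summary should
    cover the main content."""
    if selection.sum() == 0:
        return 0.0
    doc_centroid = np.asarray(tfidf.mean(axis=0))
    summary_centroid = np.asarray(tfidf[selection.astype(bool)].mean(axis=0))
    return float(cosine_similarity(summary_centroid, doc_centroid)[0, 0])

def redundancy(selection, tfidf):
    """Average pairwise similarity among selected sentences.
    Lower is better: the summary should avoid repeated information."""
    idx = np.flatnonzero(selection)
    if len(idx) < 2:
        return 0.0
    sims = cosine_similarity(tfidf[idx])
    upper = sims[np.triu_indices(len(idx), k=1)]
    return float(upper.mean())

# Toy usage: each individual in the evolutionary search would be a sparse
# binary vector like this (1 = sentence included in the summary).
sentences = [
    "the economy grew rapidly last quarter",
    "rapid economic growth was reported for the quarter",
    "unemployment fell to a ten year low",
    "analysts expect interest rates to rise",
]
tfidf = build_representation(sentences)
candidate = np.array([1, 0, 1, 1])
print("coverage:", coverage(candidate, tfidf))
print("redundancy:", redundancy(candidate, tfidf))
```

In an evolutionary multi-objective search of the kind the abstract describes, coverage would be maximized while redundancy is minimized, yielding the set of non-dominated summaries from which the target summary is then selected.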
