Abstract

In today’s world, the rapid growth of textual data on websites and other online resources makes it difficult for human beings to assimilate essential information. Text summarization (TS) addresses this problem by generating a shorter version of the original content that preserves the relevant information. This study presents a quantitative assessment of models for single- and multi-document summarization based on the sentence scoring method. The models are evaluated on DUC datasets, and a detailed comparative analysis of their performance is reported with respect to several metrics, including Recall-Oriented Understudy for Gisting Evaluation (ROUGE), Range, Coefficient of Variation (CV), and Readability score.
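For readers unfamiliar with sentence-scoring-based extractive summarization, the sketch below illustrates the general idea using a simple word-frequency scorer. It is only a minimal illustration, not the specific models evaluated in this study; the splitter, stopword list, and scoring function are assumptions made for the example.

```python
# Minimal sketch of frequency-based sentence scoring for extractive summarization.
# Illustrative only; the paper's actual models and scoring features are not
# specified in the abstract.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "for", "on", "that"}

def summarize(text: str, num_sentences: int = 3) -> str:
    # Split into sentences on terminal punctuation (naive splitter).
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Build word frequencies over non-stopword tokens.
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence: str) -> float:
        # Score a sentence as the normalized sum of its word frequencies.
        tokens = [w for w in re.findall(r"[a-z']+", sentence.lower()) if w not in STOPWORDS]
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    # Keep the highest-scoring sentences, preserving their original order.
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    chosen = sorted(ranked[:num_sentences])
    return " ".join(sentences[i] for i in chosen)

if __name__ == "__main__":
    sample = ("Text summarization condenses a document. "
              "Extractive methods score and select sentences. "
              "Scores are often based on word frequency. "
              "The highest scoring sentences form the summary.")
    print(summarize(sample, num_sentences=2))
```

In practice, evaluation metrics such as ROUGE would then compare the generated summary against reference summaries, which is the kind of comparison the study reports across models.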
