Abstract
In today’s world, the rapid growth of textual data on internet sites and online resources makes it challenging for humans to assimilate essential information. To address this issue, text summarization (TS) plays an important role. Through the TS process, a shorter version of the original content is generated while preserving the relevant information. This study presents a quantitative assessment of models for single- and multi-document summarization based on the sentence scoring method. Experiments with the models have been carried out on the DUC datasets. A detailed comparative analysis of the models is reported with respect to the performance of the algorithms on various metrics, such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE), Range, Coefficient of Variation (CV), and Readability score.
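To illustrate the general idea of sentence scoring referenced above, the sketch below ranks sentences by the average frequency of their word tokens and keeps the top-scoring ones in original order. This is a minimal, assumed illustration only; the specific scoring functions, tokenization, and summary lengths evaluated in the study are not given in the abstract, and the `summarize` function and `num_sentences` parameter here are hypothetical names.

```python
# Minimal sketch of frequency-based sentence scoring for extractive
# summarization. The scoring scheme, tokenizer, and summary length are
# assumptions for illustration, not the study's exact configuration.
import re
from collections import Counter

def summarize(text: str, num_sentences: int = 3) -> str:
    # Naive sentence splitter on terminal punctuation.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Term frequencies over lowercase word tokens of the whole document.
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence: str) -> float:
        # Score a sentence by the mean frequency of its tokens.
        tokens = re.findall(r'\w+', sentence.lower())
        return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0.0

    # Rank sentences by score, then restore original order for readability.
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    selected = sorted(ranked[:num_sentences])
    return ' '.join(sentences[i] for i in selected)
```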