Abstract

This paper focuses on the importance of coherence and of preserving content breadth in summaries produced by extractive text summarization. The study used a dataset of 16,772 pairs of extractive and corresponding abstractive summaries of scientific papers, specifically tailored to improve text coherence. We smoothed the extractive summaries through a Large Language Model (LLM) fine-tuning approach and evaluated the results using the coefficient of variation. The statistical significance of the results was assessed with the Kolmogorov-Smirnov test and the Z-test. We observed an increase in coherence in the predicted texts, highlighting the effectiveness of the proposed method.
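
The following is a minimal sketch of the kind of evaluation named above (coefficient of variation, Kolmogorov-Smirnov test, Z-test), not the authors' code; the coherence-score arrays, their ranges, and sample sizes are hypothetical placeholders for illustration.

    # Sketch: comparing coherence-score distributions for extractive vs.
    # LLM-smoothed summaries. Scores below are simulated placeholders.
    import numpy as np
    from scipy import stats

    def coefficient_of_variation(scores: np.ndarray) -> float:
        """Standard deviation divided by the mean; lower values mean
        more consistent coherence scores across summaries."""
        return np.std(scores, ddof=1) / np.mean(scores)

    # Hypothetical per-summary coherence scores (placeholders only).
    rng = np.random.default_rng(0)
    extractive_scores = rng.normal(loc=0.55, scale=0.12, size=1000)
    smoothed_scores = rng.normal(loc=0.68, scale=0.09, size=1000)

    print("CV extractive:", coefficient_of_variation(extractive_scores))
    print("CV smoothed:  ", coefficient_of_variation(smoothed_scores))

    # Kolmogorov-Smirnov test: do the two score distributions differ?
    ks_stat, ks_p = stats.ks_2samp(extractive_scores, smoothed_scores)
    print(f"KS statistic = {ks_stat:.3f}, p-value = {ks_p:.3g}")

    # Two-sample Z-test on the means (appropriate for large samples).
    mean_diff = smoothed_scores.mean() - extractive_scores.mean()
    se = np.sqrt(extractive_scores.var(ddof=1) / len(extractive_scores)
                 + smoothed_scores.var(ddof=1) / len(smoothed_scores))
    z = mean_diff / se
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    print(f"Z = {z:.2f}, p-value = {p:.3g}")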
