Abstract

The automatic evaluation of summaries is a difficult task that remains open. The assessment aims to measure both the informativeness and the readability of summaries. The scientific community has tackled the informativeness side of this problem with partial solutions such as ROUGE. However, this method requires multiple human-written summaries (the references). Reference-free methods have been implemented, but they are still far from being highly correlated with manual evaluations. In this paper we present SummTriver, an automatic evaluation method that aims to correlate more strongly with manual evaluation by combining multiple divergences. The results are promising, especially for summarization campaigns. In addition, we present a micro-level analysis of how well the manual and automatic summary evaluation methods correlate when a large number of observations is used.
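
The abstract does not give SummTriver's exact formulation, so the following is only a minimal sketch of the general idea it refers to: a reference-free, divergence-based score that compares the word distribution of a summary with that of its source document. All function names are hypothetical, and Jensen–Shannon divergence is used here purely as an illustrative choice of divergence.

```python
# Illustrative sketch of a reference-free, divergence-based summary score.
# This is NOT the SummTriver method itself; it only shows the general idea
# of scoring a summary by the divergence between word distributions.
import math
from collections import Counter


def word_distribution(text, vocab):
    """Unigram probability distribution of `text` over a shared vocabulary."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in vocab]


def kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence KL(p || q), with smoothing for zeros."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)


def jensen_shannon(p, q):
    """Symmetric Jensen-Shannon divergence between two distributions."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def divergence_score(source, summary):
    """Lower divergence = summary distribution closer to the source,
    taken here as a rough proxy for informativeness (no references needed)."""
    vocab = sorted(set(source.lower().split()) | set(summary.lower().split()))
    p = word_distribution(source, vocab)
    q = word_distribution(summary, vocab)
    return jensen_shannon(p, q)
```

A real evaluation method such as SummTriver would combine several such divergences and more careful preprocessing; the sketch only makes the core comparison concrete.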
