Abstract

We present a new summarisation task in the chemistry domain: taking scientific articles and producing journal table-of-contents entries. These entries are one- or two-sentence author-written summaries that present the key findings of a paper. This is a first look at the task, using an open access publication corpus with titles and abstracts as the input texts and short author-written advertising blurbs as the ground truth. We introduce the dataset and evaluate it with state-of-the-art summarisation methods.

Highlights

  • Table-of-contents (TOC) entries are short summaries written by authors that are placed in the table of contents of journals, often with an eye-catching accompanying image, to advertise their paper to readers

  • We can observe from the confidence interval (CI) overlap that SciBERTA is significantly better than the other two deep learning methods, whereas Lead-2 provides the most competitive baseline

  • An examination of the summaries produced on the validation set confirms that this method most often adopts the Lead-2 strategy by copying the first and/or second sentence, an issue that was observed in prior work (Qiu et al., 2020; Gehrmann et al., 2018). This result pattern is confirmed by prior work (Tab. 5), where we see that the original pointer–generator network (PGN) does not even beat the Lead-3 baseline, while the BERT-based model outperforms the other two
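The Lead-2 baseline mentioned above simply copies the first two sentences of the input. A minimal sketch, assuming a naive regex sentence splitter (a real system would use a proper tokeniser such as NLTK's `sent_tokenize`):

```python
import re

def lead_k(text: str, k: int = 2) -> str:
    """Return the first k sentences of `text` as an extractive summary."""
    # Split after sentence-final punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(sentences[:k])

# Illustrative input only, not taken from the corpus.
abstract = ("We study TOC-entry generation. Titles and abstracts serve as input. "
            "Author-written blurbs are the ground truth.")
print(lead_k(abstract))
# → We study TOC-entry generation. Titles and abstracts serve as input.
```

Despite its simplicity, this kind of positional baseline is hard to beat on scientific text, since key findings tend to appear at the start of the abstract.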


Introduction

Table-of-contents (TOC) entries are short summaries written by authors that are placed in the table of contents of journals, often with an eye-catching accompanying image, to advertise their paper to readers. In this initial study we take the titles and abstracts of chemistry papers published by the Royal Society of Chemistry as input, as they are freely available and more numerous; we will release the full text for the smaller subset of open access publications. As such, this corpus differs from other scientific corpora, which normally take the abstract as the summary. We perform a human evaluation study to compare the models and to validate the usefulness of the summarisation task.
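Comparisons between summarisers (such as the confidence-interval comparison in the highlights) are typically made over per-document scores. A hedged sketch of a percentile bootstrap confidence interval over such scores; the score values below are made-up placeholders, not results from this paper:

```python
import random

def bootstrap_ci(scores, n_resamples=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of per-document scores."""
    rng = random.Random(seed)
    n = len(scores)
    # Resample with replacement and record each resample's mean.
    means = sorted(sum(rng.choices(scores, k=n)) / n for _ in range(n_resamples))
    lo = means[int(alpha / 2 * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# Hypothetical per-document ROUGE-style scores for one system.
scores = [0.31, 0.28, 0.35, 0.30, 0.27, 0.33, 0.29, 0.32]
low, high = bootstrap_ci(scores)
print(low, high)
```

When two systems' intervals computed this way do not overlap, the difference in their mean scores is unlikely to be a sampling artefact, which is the kind of evidence the highlight appeals to.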


