Abstract

The exponential growth of scientific literature creates the need to support users in effectively and efficiently analyzing and understanding a body of research work. This exploratory process can be facilitated by providing graphical abstracts: visual summaries of scientific publications. Accordingly, previous work recently presented an initial study on automatically identifying a central figure in a scientific publication to be used as the publication's visual summary. This study, however, was limited to a single (biomedical) domain. This is primarily because the current state of the art relies on supervised machine learning, which typically requires large amounts of labeled data: until now, the only existing annotated data set covered biomedical publications exclusively. In this work, we build a novel benchmark data set for visual summary identification from scientific publications, consisting of papers presented at conferences from several areas of computer science. We couple this contribution with a new self-supervised learning approach that learns to heuristically match in-text figure references with figure captions. Our self-supervised pre-training, executed on a large unlabeled collection of publications, reduces the need for large annotated data sets for visual summary identification and facilitates domain transfer for this task. We evaluate our self-supervised pre-training for visual summary identification on both the existing biomedical data set and our newly presented computer science data set. The experimental results suggest that the proposed method outperforms the previous state of the art without any task-specific annotations.
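To make the pairing heuristic concrete, the following minimal sketch mines self-supervision pairs by matching sentences that reference a figure (e.g., "Figure 3 shows ...") with that figure's caption. The function name, the regular expression, and the negative-sampling scheme are illustrative assumptions, not the paper's exact procedure.

```python
import re

# Matches "Figure 3", "Fig. 3", etc., and captures the figure number.
FIG_REF = re.compile(r"\b(?:Fig\.|Figure)\s*(\d+)", re.IGNORECASE)

def make_pretraining_pairs(sentences, captions):
    """Build (sentence, caption, label) triples without manual annotation.

    sentences: sentences from the paper body.
    captions:  dict mapping figure number (int) -> caption text.
    A sentence that references figure k yields a positive pair with
    caption k and negative pairs with all other captions.
    """
    pairs = []
    for sent in sentences:
        referenced = {int(m) for m in FIG_REF.findall(sent)}
        if not referenced:
            continue  # only sentences citing a figure provide supervision
        for num, caption in captions.items():
            pairs.append((sent, caption, 1 if num in referenced else 0))
    return pairs

# Usage: pairs mined this way can pre-train a matcher with no manual labels.
sentences = [
    "Figure 1 shows the overall architecture of our model.",
    "We discuss related work in Section 2.",
]
captions = {1: "Overview of the proposed model.", 2: "Accuracy per domain."}
print(make_pretraining_pairs(sentences, captions))
```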

Highlights

  • Finding, analyzing, and understanding scientific literature is an essential step in every research process, and one that is becoming ever-more time-consuming with the exponential growth of scientific publications (Bornmann and Mutz, 2015)

  • Whereas Yang et al. (2019) report that using only the first figure degrades performance in central figure identification on the PubMed data set, our findings indicate that exploiting the bias caused by the order of the figures may be beneficial in other domains, e.g., Computer Science (CS); see the sketch after this list

  • We investigated the problem of central figure identification, the task of identifying a candidate figure that can serve as the visual summary of a scientific publication, referred to as a Graphical Abstract (GA)
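As a hedged illustration of the figure-order bias mentioned above, the snippet below blends a model's matching scores with a simple "earlier figures score higher" prior. The blend weight and the linear interpolation are our own illustrative choices, not the paper's method.

```python
def rerank_with_position_prior(scores, alpha=0.8):
    """Blend matching scores with a prior favoring earlier figures.

    scores: matching scores for the figures in document order.
    alpha:  weight on the model score; (1 - alpha) goes to the prior.
    """
    n = len(scores)
    prior = [(n - i) / n for i in range(n)]  # Figure 1 gets the highest prior
    return [alpha * s + (1 - alpha) * p for s, p in zip(scores, prior)]
```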


Summary

INTRODUCTION

Finding, analyzing, and understanding scientific literature is an essential step in every research process, and one that is becoming ever more time-consuming with the exponential growth of scientific publications (Bornmann and Mutz, 2015). To allow for the use of GAs in large-scale scenarios, Yang et al. (2019) proposed the novel task of identifying a central figure from scientific publications, i.e., selecting the best candidate figure that can serve as a GA. In their work, they asked authors of publications uploaded to PubMed to select the most appropriate figure among all figures in their paper as the central figure. We consider pairs of abstracts and figure captions as the model's input to predict whether the content of the figure matches the article's abstract (i.e., the overview of the article). This stands in contrast to sentence matching. We provide a comparison of central figure identification across training data from different domains.
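The abstract–caption matching setup described here could be wired up, for instance, as a pair classifier with Hugging Face Transformers. The sketch below is a minimal illustration under our own assumptions: the bert-base-uncased checkpoint is a stand-in (in practice one would use a model after the self-supervised pre-training), and class 1 is assumed to mean "match". It is not the authors' exact architecture.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in checkpoint; any pair-classification model fine-tuned or
# pre-trained on the heuristic reference-caption pairs would slot in here.
MODEL_NAME = "bert-base-uncased"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def pick_central_figure(abstract, captions):
    """Return the index of the caption that best matches the abstract."""
    scores = []
    for caption in captions:
        # Encode the (abstract, caption) pair jointly, cross-encoder style.
        inputs = tokenizer(abstract, caption, truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        # Assumption: class 1 is the "figure matches abstract" label.
        scores.append(torch.softmax(logits, dim=-1)[0, 1].item())
    return max(range(len(captions)), key=scores.__getitem__)
```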

RELATED WORK
ANNOTATION STUDY
METHODOLOGY
Implementation Details
Dataset
Experimental Results
Findings
CONCLUSION

