Abstract

The growing volume of scientific literature makes it difficult for researchers to identify the key contributions of a research paper. Automating this process would enable more efficient comprehension, faster literature surveys, and easier comparisons, helping researchers pinpoint relevant and impactful information with less time and effort. In this article, we address the challenge of identifying the contributions made in research articles. We propose a method that infuses factual knowledge from a scientific knowledge graph into a pre-trained model. We divide the knowledge graph into mutually exclusive subgroups and infuse the knowledge into the pre-trained model using adapters. We also construct a scientific knowledge graph of 3,600 Natural Language Processing (NLP) papers to acquire factual knowledge. In addition, we annotate a new test set to evaluate the model's ability to identify sentences that make significant contributions to the papers. Our model outperforms previous methods, achieving relative improvements of 40.06% and 25.28% in F1 score for identifying contributing sentences on the NLPContributionGraph (NCG) test set and the newly annotated test set, respectively.
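To illustrate the adapter-based knowledge infusion described above, the sketch below shows one common way such a setup can be wired: a bottleneck adapter per knowledge-graph subgroup applied on top of a frozen pre-trained encoder's hidden states. This is not the authors' implementation; names such as `KGAdapter`, `hidden_size`, `bottleneck_dim`, and the number of subgroups are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code) of infusing
# knowledge from mutually exclusive KG subgroups via bottleneck adapters.
import torch
import torch.nn as nn


class KGAdapter(nn.Module):
    """Bottleneck adapter trained on triples from one KG subgroup (hypothetical)."""

    def __init__(self, hidden_size: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen encoder's representation
        # while the adapter adds subgroup-specific knowledge features.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


# Usage: one adapter per KG subgroup, applied to the encoder's hidden states
# before the sentence-level contribution classifier (3 subgroups assumed here).
adapters = nn.ModuleList([KGAdapter() for _ in range(3)])
hidden = torch.randn(2, 16, 768)  # (batch, sequence length, hidden size)
for adapter in adapters:
    hidden = adapter(hidden)
```

Because only the small adapter layers are trained, each subgroup's factual knowledge can be added without updating, and thus without disturbing, the pre-trained model's original weights.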
