Abstract
The growing volume of scientific literature makes it difficult for researchers to identify the key contributions of a research paper. Automating this process would enable more efficient understanding, faster literature surveys, and easier comparisons, helping researchers find relevant and impactful information with less time and effort. In this article, we address the challenge of identifying the contributions made in research articles. We propose a method that infuses factual knowledge from a scientific knowledge graph into a pre-trained model. We divide the knowledge graph into mutually exclusive subgroups and infuse each subgroup's knowledge into the pre-trained model using adapters. We also construct a scientific knowledge graph consisting of 3,600 Natural Language Processing (NLP) papers to acquire factual knowledge. In addition, we annotate a new test set to evaluate the model’s ability to identify sentences that make significant contributions to the papers. Our model achieves the best performance in comparison to previous methods, with relative improvements of 40.06% and 25.28% in F1 score for identifying contributing sentences on the NLPContributionGraph (NCG) test set and the newly annotated test set, respectively.
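To make the adapter-based infusion concrete, below is a minimal sketch of a bottleneck adapter and of keeping one adapter per knowledge-graph subgroup. The bottleneck design, hidden sizes, and the subgroup names ("task", "method", "result") are illustrative assumptions, not details taken from the paper.

```python
# Sketch of adapter-based knowledge infusion (assumed bottleneck adapter design;
# sizes and subgroup names are hypothetical).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter inserted after a transformer sub-layer."""
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # project down
        self.up = nn.Linear(bottleneck, hidden_size)    # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the pre-trained representation;
        # only the small adapter weights are trained on a knowledge subgroup.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# One adapter per mutually exclusive knowledge-graph subgroup (names are assumed):
adapters = nn.ModuleDict({name: Adapter() for name in ["task", "method", "result"]})
x = torch.randn(2, 16, 768)        # (batch, tokens, hidden) activations
out = adapters["method"](x)        # apply the adapter trained on one subgroup
```

The appeal of this kind of design is that the frozen pre-trained weights stay shared while each subgroup's factual knowledge lives in a small, separately trained adapter.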