Abstract

The problem of natural language processing over structured data has become a growing research field, both within the relational database and the Semantic Web community, with significant efforts involved in question answering over knowledge graphs (KGQA). However, many of these approaches are either specifically targeted at open-domain question answering using DBpedia, or require large training datasets to translate a natural language question to SPARQL in order to query the knowledge graph. Hence, these approaches often cannot be applied directly to complex scientific datasets where no prior training data is available. In this paper, we focus on the challenges of natural language processing over knowledge graphs of scientific datasets. In particular, we introduce Bio-SODA, a natural language processing engine that does not require training data in the form of question-answer pairs for generating SPARQL queries. Bio-SODA uses a generic graph-based approach for translating user questions to a ranked list of SPARQL candidate queries. Furthermore, Bio-SODA uses a novel ranking algorithm that includes node centrality as a measure of relevance for selecting the best SPARQL candidate query. Our experiments with real-world datasets across several scientific domains, including the official bioinformatics Question Answering over Linked Data (QALD) challenge, as well as the CORDIS dataset of European projects, show that Bio-SODA outperforms publicly available KGQA systems by an F1-score of at least 20%, and by an even larger margin on more complex bioinformatics datasets.
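
To make the ranking idea concrete, the following is a minimal sketch (not Bio-SODA's actual implementation) of how candidate queries could be ranked by the centrality of the knowledge-graph nodes they touch. The toy schema graph, the candidate queries, the choice of PageRank as the centrality measure, and the summed-centrality score are all illustrative assumptions.

```python
# Illustrative sketch: rank SPARQL candidate queries by node centrality.
# Graph, queries, and scoring are toy assumptions, not Bio-SODA's code.
import networkx as nx

# Toy knowledge-graph schema: classes connected by properties.
schema = nx.Graph()
schema.add_edges_from([
    ("Gene", "Protein"), ("Gene", "Species"),
    ("Protein", "Pathway"), ("Species", "Taxon"),
])

# Node centrality (here PageRank) as a proxy for relevance.
centrality = nx.pagerank(schema)

# Hypothetical candidate queries, each described by the schema nodes it uses.
candidates = {
    "SELECT ?p WHERE { ?g a :Gene ; :encodes ?p }": ["Gene", "Protein"],
    "SELECT ?t WHERE { ?s a :Species ; :taxon ?t }": ["Species", "Taxon"],
}

# Score each candidate by the summed centrality of its nodes, highest first.
ranked = sorted(
    candidates.items(),
    key=lambda kv: sum(centrality[n] for n in kv[1]),
    reverse=True,
)
for query, nodes in ranked:
    print(round(sum(centrality[n] for n in nodes), 3), query)
```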

Highlights

  • The problem of natural language processing over structured data has gained significant traction, both in the Semantic Web community – with a focus on answering natural language questions over RDF graph databases [10, 42, 45] – and in the relational database community, where the goal is to answer questions by finding their semantically equivalent translations to SQL [7, 22, 23, 35]

  • In this paper we have introduced Bio-SODA, a question answering system for domain knowledge graphs, which we evaluated across three real-world datasets pertaining to different domains: biomedical, gene orthology and gene expression, and European Union (EU)-funded projects

  • Our results have shown that Bio-SODA outperforms state-of-the-art systems that are publicly available for testing by an F1-score improvement of 20% and more

Introduction

The problem of natural language processing over structured data has gained significant traction, both in the Semantic Web community – with a focus on answering natural language questions over RDF graph databases [10, 42, 45] – and in the relational database community, where the goal is to answer questions by finding their semantically equivalent translations to SQL [7, 22, 23, 35]. Significant research efforts have been invested in particular in open-domain question answering over knowledge graphs. These efforts often use the DBpedia and/or Wikidata knowledge bases, which are composed of structured content from various Wikimedia projects. A growing ecosystem of tools is becoming available for solving subtasks of the KGQA problem, such as entity linking [14, 27, 31, 36] or query generation [44]. Most of these tools are targeted at question answering over DBpedia [38], which casts doubt on their applicability to other contexts, such as scientific datasets. Moreover, most of the existing datasets are synthetic (i.e., not based on real query logs) and generally limited to DBpedia or Wikidata, which may not be representative of knowledge graphs for scientific datasets.
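
As a minimal illustration of the two KGQA subtasks mentioned above (entity linking and query generation), the sketch below links question tokens to DBpedia terms via a toy lexicon and fills a single triple-pattern template, then runs the query against the public DBpedia endpoint. The lexicon, the template, the helper to_sparql, and the example question are assumptions made for illustration only; real systems rely on dedicated entity linkers and learned or graph-based query builders.

```python
# Illustrative sketch of entity linking + SPARQL query generation over DBpedia.
# The lexicon, template, and question are toy assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

# Toy entity/property lexicon standing in for an entity-linking component.
lexicon = {
    "berlin": "<http://dbpedia.org/resource/Berlin>",
    "population": "<http://dbpedia.org/ontology/populationTotal>",
}

def to_sparql(question: str) -> str:
    """Link question tokens to KG terms and fill a simple triple-pattern template."""
    tokens = question.lower().replace("?", "").split()
    entity = next(lexicon[t] for t in tokens if "resource" in lexicon.get(t, ""))
    prop = next(lexicon[t] for t in tokens if "ontology" in lexicon.get(t, ""))
    return f"SELECT ?answer WHERE {{ {entity} {prop} ?answer }}"

# Generate the query and execute it against the public DBpedia endpoint.
query = to_sparql("What is the population of Berlin?")
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
print(query)
print(endpoint.query().convert()["results"]["bindings"])
```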
