Abstract
Knowledge graphs represented as RDF datasets are integral to many machine learning applications. RDF is supported by a rich ecosystem of data management systems and tools, most notably RDF database systems that provide a SPARQL query interface. Surprisingly, machine learning tools for knowledge graphs do not use SPARQL, despite the obvious advantages of using a database system. This is due to the mismatch between SPARQL and machine learning tools in terms of data model and programming style. Machine learning tools work on data in tabular format and process it using an imperative programming style, while SPARQL is declarative and its basic operation is matching graph patterns against RDF triples. We posit that a good interface to knowledge graphs from a machine learning software stack should use an imperative, navigational programming paradigm based on graph traversal rather than the SPARQL query paradigm based on graph patterns. In this paper, we present RDFFrames, a framework that provides such an interface. RDFFrames provides an imperative Python API that is internally translated to SPARQL, and it is integrated with the PyData machine learning software stack. RDFFrames enables the user to make a sequence of Python calls to define the data to be extracted from a knowledge graph stored in an RDF database system; it translates these calls into a compact SPARQL query, executes the query on the database system, and returns the results in a standard tabular format. Thus, RDFFrames is a useful tool for data preparation that combines the usability of PyData with the flexibility and performance of RDF database systems.
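To make this imperative, chained-call style concrete, here is a minimal sketch of the kind of pipeline the abstract describes. The module paths, class names, and method signatures (KnowledgeGraph, feature_domain_range, expand, group_by, count, execute, HttpClient) follow the spirit of the RDFFrames API but are approximations, and the endpoint URL and DBpedia predicates are placeholders, so treat this as an illustration rather than working documentation.

# Illustrative sketch: names and signatures approximate the imperative style
# described above and may not match the actual RDFFrames API exactly.
from rdfframes.knowledge_graph import KnowledgeGraph    # assumed module path
from rdfframes.client.http_client import HttpClient     # assumed module path

# A client that talks to a SPARQL endpoint (URL is a placeholder).
client = HttpClient(endpoint_url="http://localhost:8890/sparql")

graph = KnowledgeGraph(graph_uri="http://dbpedia.org")   # constructor args assumed

# Each call refines the description of the data to extract; nothing runs yet.
actors = (graph.feature_domain_range("dbpp:starring", "movie", "actor")
               .expand("actor", [("dbpp:birthPlace", "actor_country")])
               .group_by(["actor"])
               .count("movie", "movie_count"))

# Only here is the whole chain compiled into one compact SPARQL query,
# executed on the RDF engine, and returned in tabular (DataFrame) form.
df = actors.execute(client)
print(df.head())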
Highlights
There has recently been a sharp growth in the number of knowledge graph datasets that are made available in the RDF (Resource Description Framework) data model.
The remaining time is spent on issuing the query to the engine and retrieving the results. This is typical across all our experiments: RDFFrames needs a few milliseconds to generate the SPARQL query, and the rest of the time is spent on query processing.
The query produced by naive query generation did not finish within one hour, at which point we terminated it. This demonstrates that RDFFrames needs to generate optimized SPARQL itself rather than rely exclusively on the engine's query optimizer (a hypothetical illustration of the two query shapes follows below).
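The difference between naive and optimized query generation can be seen in the shape of the SPARQL that reaches the engine. The snippet below is a hypothetical illustration, not a query from the paper's experiments: a naive translation wraps each API call in its own nested subquery, whereas an optimized translation emits a single flat graph pattern that the engine can plan as one join.

# Hypothetical illustration of the two query shapes, written as Python strings.
# The predicates are DBpedia-style placeholders, not the paper's benchmark queries.

# Naive translation: one nested subquery per API call.
naive_query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?actor ?country WHERE {
  { SELECT ?actor ?place WHERE {
      { SELECT ?actor WHERE { ?film dbo:starring ?actor . } }
      ?actor dbo:birthPlace ?place .
  } }
  ?place dbo:country ?country .
}
"""

# Optimized translation: the same request as one flat graph pattern.
optimized_query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?actor ?country WHERE {
  ?film  dbo:starring   ?actor .
  ?actor dbo:birthPlace ?place .
  ?place dbo:country    ?country .
}
"""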
Summary
There has recently been a sharp growth in the number of knowledge graph datasets that are made available in the RDF (Resource Description Framework) data model. This ecosystem includes standard serialization formats, parsing and processing libraries, and most notably RDF database management systems (a.k.a. RDF engines or triple stores) that support SPARQL, the W3C standard query language for RDF data. Examples of these systems include OpenLink Virtuoso, Apache Jena, and managed services such as Amazon Neptune.
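For comparison with the RDFFrames approach, the conventional way to access such an engine from Python is to write SPARQL by hand, send it to the engine's HTTP endpoint (for example with the SPARQLWrapper library), and flatten the JSON bindings into a table. The endpoint URL and predicate below are placeholders for whatever engine and dataset you have loaded.

from SPARQLWrapper import SPARQLWrapper, JSON
import pandas as pd

# Placeholder endpoint; a local Virtuoso instance typically listens here.
sparql = SPARQLWrapper("http://localhost:8890/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?film ?actor WHERE { ?film dbo:starring ?actor . } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Flatten the JSON bindings into the tabular format machine learning tools expect.
rows = [{var: b[var]["value"] for var in b} for b in results["results"]["bindings"]]
df = pd.DataFrame(rows)
print(df)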