Abstract

OWL2 semantics are becoming increasingly popular for real-world domain applications such as gene engineering and health management information systems (MIS). The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. It is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of these results, end users (i.e. domain experts) can select a KBS appropriate for their domain.

Highlights

  • Ontologies are extensively used in scientific domains such as gene engineering and life-critical systems

  • The semantic tools used for conducting the experimentation are the Jena Application Program Interface (API), OpenRDF Workbench, Protégé, MySQL and SQL Server, running on an Intel Core i5-4200M CPU @ 2.5 GHz with 6 GB RAM

  • In contrast, the performance of relational-database Knowledge Base Systems (KBS) (i.e. Sesame DB, Jena SDB, Ontrel and OWL2TRDB) for object-property characteristics pattern queries (OPQ) is very poor over large datasets


Introduction

Ontologies are extensively used in scientific domains such as gene engineering and life-critical systems. The present work addresses the building blocks of a standard evaluation benchmark, i.e. data schema, workload and performance metrics; the details of these building blocks are provided below. The methodology of the present work comprises an analysis of the existing benchmarks and the construction of the data schema and workload (i.e. data generator and query set) for OWL2 semantics. The structural complexity of the data schema and the ontology semantics are important factors in benchmark performance; these factors provide the basis for the proposed evaluation criteria, which comprise two elements. The existing benchmarks evaluate KBSs on OWL semantics in their data schema, dataset generator and workloads. OWL2 semantics are not covered by these benchmarks, except for OntoBench [6], whose data schema generates ontologies with a predefined structure along with options to select OWL and OWL2 elements.
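The workload and performance-metric building blocks described above can be illustrated with a minimal sketch: a timing harness that records the average response time of a workload query, run here against a toy in-memory triple store with a transitive object property (the kind of OWL2 object-property characteristic that OPQ-style queries exercise). The store, the `time_query` harness and the `transitive_closure` function are hypothetical illustrations, not part of the proposed benchmark's actual implementation.

```python
import time

def time_query(run_query, repeats=3):
    """Average wall-clock response time (seconds) of a zero-argument query callable."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Toy in-memory "KBS": (subject, predicate, object) triples where "partOf"
# stands in for an owl:TransitiveProperty in a real OWL2 store.
triples = {("a", "partOf", "b"), ("b", "partOf", "c")}

def transitive_closure(prop):
    """Naively materialise the closure of a transitive property (OPQ-style query)."""
    closure = {(s, o) for s, p, o in triples if p == prop}
    changed = True
    while changed:
        extra = {(s, o2) for (s, o) in closure for (o1, o2) in closure if o == o1}
        changed = not extra <= closure
        closure |= extra
    return closure

avg = time_query(lambda: transitive_closure("partOf"))
print(sorted(transitive_closure("partOf")))  # the inferred ("a", "c") appears alongside the asserted pairs
```

A real benchmark run replaces the toy callable with a query submitted to the KBS under test (e.g. via the Jena API or OpenRDF Workbench) and reports the averaged response time per query, alongside load time, as the performance metric.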

Evaluation benchmark
Results and discussion
Conclusion