Abstract

In recent years, many question answering (QA) systems that retrieve answers to natural language questions from knowledge graphs (KGs) have been introduced. However, finding a benchmark that accurately evaluates the quality of a QA system is difficult because of (1) the wide variation in fine-grained properties among the available benchmarks, (2) the static nature of the available benchmarks versus the evolving nature of KGs, and (3) the limited number of KGs targeted by existing benchmarks, which hinders the deployment of QA systems over KGs other than those they were evaluated on. In this demonstration, we introduce SmartBench, an automatic benchmark generation system for QA over any KG. The benchmark generated by SmartBench is guaranteed to cover all properties of natural language questions and queries encountered in the literature, as long as the targeted KG includes these properties.
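To make the core idea concrete, the following is a minimal sketch of what template-based benchmark generation over an arbitrary KG might look like. It is not SmartBench's actual pipeline; the SPARQL endpoint, the question/query templates, and the choice of the dbo:author property are all illustrative assumptions. One query template is instantiated against a public endpoint, and each binding yields a (question, query, answer) benchmark item.

```python
# Illustrative sketch of template-based QA benchmark generation over a KG.
# NOT SmartBench's actual algorithm; endpoint, templates, and the
# dbo:author property are assumptions chosen for demonstration only.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://dbpedia.org/sparql"  # any public SPARQL endpoint

# One (question template, query template) pair; a real generator would
# instantiate many templates to cover fine-grained question properties.
QUESTION_TEMPLATE = "Who is the author of {label}?"
QUERY_TEMPLATE = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?book ?label ?author WHERE {
  ?book dbo:author ?author ;
        rdfs:label ?label .
  FILTER (lang(?label) = "en")
} LIMIT 5
"""

def generate_benchmark():
    # Run the seed query against the KG and turn each result row
    # into one benchmark item: a question, its gold SPARQL query,
    # and the expected answer.
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(QUERY_TEMPLATE)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    benchmark = []
    for row in results["results"]["bindings"]:
        label = row["label"]["value"]
        book_uri = row["book"]["value"]
        benchmark.append({
            "question": QUESTION_TEMPLATE.format(label=label),
            "query": f"SELECT ?a WHERE {{ <{book_uri}> "
                     f"<http://dbpedia.org/ontology/author> ?a }}",
            "answer": row["author"]["value"],
        })
    return benchmark

if __name__ == "__main__":
    for item in generate_benchmark():
        print(item["question"], "->", item["answer"])
```

Note that in this sketch the generated benchmark can only exercise properties (here, authorship) that actually exist in the targeted KG, which mirrors the coverage condition stated in the abstract.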
