Abstract

In recent years, a significant number of question answering (QA) systems that retrieve answers to natural language questions from knowledge graphs (KGs) have been introduced. However, finding a benchmark that accurately evaluates the quality of a QA system is difficult because of (1) the high degree of variation in fine-grained properties among the available benchmarks, (2) the static nature of the available benchmarks versus the evolving nature of KGs, and (3) the limited number of KGs targeted by existing benchmarks, which hinders the deployment of QA systems over KGs different from those on which they were evaluated. In this demonstration, we introduce SmartBench, an automatic benchmark generation system for QA over any KG. The benchmark generated by SmartBench is guaranteed to cover all the properties of natural language questions and queries encountered in the literature, provided the targeted KG includes these properties.
