Abstract

Over the years, significant improvements in the computational efficiency of Semantic Web search and exploration systems have been observed. However, it has been much harder to assess how well the user interfaces of different semantic systems serve their users. One of the key factors facilitating the advancement of research in a particular field is the ability to compare the performance of different approaches. While such benchmarks exist in many Semantic Web subfields, and those subfields have seen significant improvements as a result, this is not the case for Semantic Web user interfaces for data exploration. We propose and demonstrate a benchmark for evaluating such user interfaces, comprising a set of typical user tasks and a well-defined procedure for assigning a semantic system a measure of performance on those tasks. We have applied the benchmark to four such systems. Moreover, all the resources required to apply the benchmark are openly available online. We intend to initiate a community conversation that will lead to a generally accepted framework for comparing systems and for measuring, and thus encouraging, progress towards better semantic search and exploration tools.
