Abstract

Background: Classifying entities in domain ontologies under upper ontology classes is a recommended task in ontology engineering to facilitate semantic interoperability and modelling consistency. Integrating upper ontologies this way is difficult and, despite emerging automated methods, remains a largely manual task.

Problem: Little is known about how well experts perform at upper ontology integration. To develop methodological and tool support, we first need to understand how well experts do this task. We designed a study to measure the performance of human experts at manually classifying classes in a general knowledge domain ontology with entities in the Basic Formal Ontology (BFO), an upper ontology used widely in the biomedical domain.

Method: We recruited 8 BFO experts and asked them to classify 46 commonly known entities from the domain of travel with BFO entities. The tasks were delivered as part of a web survey.

Results: We find that, even for a well understood general knowledge domain such as travel, the results of the manual classification tasks are highly inconsistent: the mean agreement of the participants with the classification decisions of an expert panel was only 51%, and the inter-rater agreement using Fleiss' Kappa was merely moderate (0.52). We further follow up on the conjecture that the degree of classification consistency is correlated with the frequency with which the respective BFO classes are used in practice and find that this is only true to a moderate degree (0.52, Pearson).

Conclusions: We conclude that manually classifying domain entities under upper ontology classes is indeed very difficult to do correctly. Given the importance of the task and the high degree of inconsistent classifications we encountered, we further conclude that it is necessary to improve the methodological framework surrounding the manual integration of domain and upper ontologies.

Highlights

  • Upper ontologies, sometimes called upper-level ontologies, conceptualise basic categories of entities common across all domains, such as physical entities and processes, and can serve as tools facilitating semantic interoperability and data integration

  • Even for a well understood general knowledge domain such as travel, the results of the manual classification tasks are highly inconsistent: the mean agreement of the participants with the classification decisions of an expert panel was only 51%, and the inter-rater agreement using Fleiss’ Kappa was merely moderate (0.52)

  • We conclude that manually classifying domain entities under upper ontology classes is very difficult to do correctly


Introduction

Upper ontologies, sometimes called upper-level ontologies, conceptualise basic categories of entities common across all domains, such as physical entities and processes, and can serve as tools facilitating semantic interoperability and data integration. We investigate how well experts in the Basic Formal Ontology (BFO), an upper ontology used widely in the biomedical domain, can manually classify domain entities under BFO classes. This is, to our knowledge, the first systematic study to determine the performance of experts at upper ontology integration. Classifying entities in domain ontologies under upper ontology classes is a recommended task in ontology engineering to facilitate semantic interoperability and modelling consistency. Integrating upper ontologies this way is difficult and, despite emerging automated methods, remains a largely manual task. We designed a study to measure the performance of human experts at manually classifying classes in a general knowledge domain ontology with entities in BFO.

