Abstract

Background: The increasing adoption of ontologies in biomedical research and the growing number of ontologies available have made it necessary to assure the quality of these resources. Most of the well-established ontologies, such as the Gene Ontology or SNOMED CT, have their own quality assurance processes. These have demonstrated their usefulness for the maintenance of the resources but are unable to detect all of the modelling flaws in the ontologies. Consequently, the development of efficient and effective quality assurance methods is needed.

Methods: Here, we propose a series of quantitative metrics, based on the processing of the lexical regularities in the content of the ontology, to analyse readability and structural accuracy. The readability metrics account for the ratio of labels, descriptions, and synonyms associated with the ontology entities. The structural accuracy metrics evaluate how well two ontology modelling best practices are followed: (1) lexically suggest, logically define (LSLD), that is, whether what is expressed in natural language for humans is also available as logical axioms for machines; and (2) systematic naming, which accounts for the amount of label content shared by the classes in a given taxonomy.

Results: We applied the metrics to different versions of SNOMED CT. Both the readability and the structural accuracy metrics remained stable over time but captured some changes in the modelling decisions in SNOMED CT. The value of the LSLD metric increased from 0.27 to 0.31, and the value of the systematic naming metric was around 0.17. We also analysed the readability and structural accuracy of the SNOMED CT July 2019 release. The results showed that the fulfilment of the structural accuracy criteria varied among the SNOMED CT hierarchies: the value of the metrics for the hierarchies was in the range 0–0.92 (LSLD) and 0.08–1 (systematic naming). We also identified the cases that did not meet the best practices.

Conclusions: We generated useful information about the engineering of the ontology, making the following contributions: (1) a set of readability metrics, (2) the use of lexical regularities to define structural accuracy metrics, and (3) the generation of quality assurance information for SNOMED CT.
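
As a minimal illustration of the readability metrics described above (a sketch over an assumed in-memory representation of annotated classes, not the authors' implementation; the names OntologyClass and readability_ratios are illustrative), the following Python snippet computes the ratio of classes that carry at least one label, description, or synonym.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OntologyClass:
    # Minimal stand-in for an ontology entity and its lexical annotations.
    iri: str
    labels: List[str] = field(default_factory=list)
    descriptions: List[str] = field(default_factory=list)
    synonyms: List[str] = field(default_factory=list)

def readability_ratios(classes: List[OntologyClass]) -> Dict[str, float]:
    # Fraction of classes with at least one label, description, or synonym.
    total = len(classes) or 1
    return {
        "labels": sum(bool(c.labels) for c in classes) / total,
        "descriptions": sum(bool(c.descriptions) for c in classes) / total,
        "synonyms": sum(bool(c.synonyms) for c in classes) / total,
    }

# Example: two annotated classes and one bare class.
classes = [
    OntologyClass("C1", labels=["myocardial infarction"],
                  descriptions=["Necrosis of heart muscle caused by ischaemia."],
                  synonyms=["heart attack"]),
    OntologyClass("C2", labels=["fracture of femur"]),
    OntologyClass("C3"),
]
print(readability_ratios(classes))  # e.g. {'labels': 0.67, 'descriptions': 0.33, 'synonyms': 0.33}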

Highlights

  • The increasing adoption of ontologies in biomedical research and the growing number of ontologies available have made it necessary to assure the quality of these resources

  • We propose two metrics related to structural accuracy: (1) systematic naming, which measures the lexical similarity between the labels of classes and their descendant classes; and (2) lexically suggest, logically define (LSLD), which measures how aligned the label and the axioms of a class are, as this principle implies that what is expressed in natural language for humans should also be available to machines as axioms (an illustrative sketch of the systematic naming measure follows this list)

  • This section describes the metrics that we developed for measuring readability and structural accuracy, as well as the method we propose to evaluate their application to the quality assurance of a given ontology
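
As a sketch of the systematic naming idea in the highlights above (an illustrative assumption about how shared label content could be quantified, not the authors' exact measure), the snippet below scores the fraction of descendant labels that contain every token of the ancestor's label. The LSLD metric would additionally require the logical axioms of each class and is omitted here.

def tokens(label: str) -> set:
    # Lower-cased word tokens of a class label.
    return set(label.lower().split())

def systematic_naming_score(class_label: str, descendant_labels: list) -> float:
    # Fraction of descendants whose labels contain every token of the
    # ancestor's label, i.e. descendants that lexically 'inherit' its name.
    if not descendant_labels:
        return 0.0
    parent = tokens(class_label)
    shared = sum(parent <= tokens(d) for d in descendant_labels)
    return shared / len(descendant_labels)

# Example: two of three descendants repeat the ancestor's label content.
descendants = ["fracture of femur", "fracture of tibia", "open wound of hand"]
print(systematic_naming_score("fracture", descendants))  # 0.666...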


Introduction

The increasing adoption of ontologies in biomedical research and the growing number of ontologies available have made it necessary to assure the quality of these resources. Most of the well-established ontologies, such as the Gene Ontology or SNOMED CT, have their own quality assurance processes. These have demonstrated their usefulness for the maintenance of the resources but are unable to detect all of the modelling flaws in the ontologies. The quality assurance process ensures that the design requirements are met; this process must include methods for identifying flaws and, ideally, for proposing corrective actions. Several metrics have been proposed to support such processes: ‘lack of cohesion in methods’ [18], ‘tangledness’ [15], ‘semantic variance’ [19], or those related to ‘ontology richness’ defined in [20, 21] use the semantic information stored in the ontology to quantify structural aspects. Other examples are the metrics defined in [22, 23], which use an external corpus of domain-related documents to measure the coverage of ontologies in a specialised domain, or metrics based on semiotics [24].

