Abstract

Background: The use of knowledge models facilitates information retrieval, knowledge base development, and therefore supports new knowledge discovery that ultimately enables decision support applications. Most existing works have employed machine learning techniques to construct a knowledge base. However, they often suffer from low precision in extracting entities and relationships. In this paper, we describe a data-driven sublanguage pattern mining method that can be used to create a knowledge model. We combined natural language processing (NLP) and semantic network analysis in our model generation pipeline.

Methods: As a use case of our pipeline, we utilized data from an open-source imaging case repository, Radiopaedia.org, to generate a knowledge model that represents the contents of medical imaging reports. We extracted entities and relationships using the Stanford part-of-speech parser and the "Subject:Relationship:Object" syntactic data schema. The identified noun phrases were tagged with Unified Medical Language System (UMLS) semantic types. An evaluation was done on a dataset comprising 83 image notes from four data sources.

Results: A semantic type network was built based on the co-occurrence of 135 UMLS semantic types in 23,410 medical image reports. By regrouping the semantic types and generalizing the semantic network, we created a knowledge model that contains 14 semantic categories. Our knowledge model was able to cover 98% of the content in the evaluation corpus and revealed 97% of the relationships. Machine annotation achieved a precision of 87%, recall of 79%, and F-score of 82%.

Conclusion: The results indicated that our pipeline was able to produce a comprehensive content-based knowledge model that could represent context from various sources in the same domain.
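The extraction step outlined in the Methods above (dependency parsing, "Subject:Relationship:Object" triple extraction, and UMLS semantic-type tagging) can be illustrated with a minimal Python sketch. This is not the authors' implementation: it uses the Stanford-developed Stanza parser rather than the original Stanford part-of-speech parser, the sample sentence is invented, and the toy semantic-type lookup merely stands in for a real UMLS Metathesaurus query.

```python
# Illustrative sketch of Subject:Relationship:Object extraction from an
# imaging-report sentence, using the Stanza dependency parser.
import stanza

stanza.download("en")  # one-time model download
nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

# Hypothetical mapping from noun-phrase heads to UMLS semantic types; a real
# pipeline would query the UMLS Metathesaurus instead.
TOY_UMLS_TYPES = {
    "mass": "Finding",
    "lobe": "Body Part, Organ, or Organ Component",
}

def extract_sro(sentence):
    """Return (subject, relation, object) triples from nsubj/obj dependency arcs."""
    words = sentence.words
    triples = []
    for w in words:
        if w.upos != "VERB":
            continue
        subj = next((x.text for x in words
                     if x.head == w.id and x.deprel.startswith("nsubj")), None)
        obj = next((x.text for x in words
                    if x.head == w.id and x.deprel in ("obj", "obl")), None)
        if subj and obj:
            triples.append((subj, w.lemma, obj))
    return triples

doc = nlp("A hyperdense mass occupies the right frontal lobe.")
for subj, rel, obj in extract_sro(doc.sentences[0]):
    subj_type = TOY_UMLS_TYPES.get(subj.lower(), "unknown")
    obj_type = TOY_UMLS_TYPES.get(obj.lower(), "unknown")
    print(f"{subj} [{subj_type}] :{rel}: {obj} [{obj_type}]")
# -> mass [Finding] :occupy: lobe [Body Part, Organ, or Organ Component]
```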

Highlights

  • The use of knowledge models facilitates information retrieval, knowledge base development, and supports new knowledge discovery that enables decision support applications

  • Knowledge model generation: To reveal the sublanguage pattern, we summarized the semantic types occurring in the corpus and visualized entity relationships using a co-occurrence-based semantic network (a rough sketch of this step follows the highlights)

  • The results of using 135 Unified Medical Language System (UMLS) semantic types for semantic annotation demonstrated that the majority (80.32%) of the radiology cases in the corpus were covered by the top 22 (16.3%) UMLS semantic types (Fig. 3)

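The co-occurrence-based semantic network mentioned above can be sketched as follows. This is an illustrative example only, not the authors' code: the per-report semantic-type sets are hypothetical, and networkx is assumed as the graph library.

```python
# Illustrative sketch: build a co-occurrence network of UMLS semantic types.
# Each report is reduced to the set of semantic types it mentions; edge
# weights count how many reports contain both types. Sample data are made up.
from collections import Counter
from itertools import combinations
import networkx as nx

reports_semantic_types = [
    {"Finding", "Body Part, Organ, or Organ Component", "Diagnostic Procedure"},
    {"Finding", "Diagnostic Procedure", "Neoplastic Process"},
    {"Body Part, Organ, or Organ Component", "Neoplastic Process"},
]

pair_counts = Counter()
for types in reports_semantic_types:
    pair_counts.update(combinations(sorted(types), 2))

graph = nx.Graph()
for (a, b), weight in pair_counts.items():
    graph.add_edge(a, b, weight=weight)

# Semantic types with high weighted degree are natural candidates for the
# broader semantic categories of the knowledge model.
print(sorted(graph.degree(weight="weight"), key=lambda kv: -kv[1]))
```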

Introduction

The use of knowledge models facilitates information retrieval, knowledge base development, and supports new knowledge discovery that enables decision support applications. Most existing works have employed machine learning techniques to construct a knowledge base, but they often suffer from low precision in extracting entities and relationships. Coden et al. proposed a Cancer Disease Knowledge Representation Model (CDKRM), which was able to automatically extract information from free-text pathology reports [3] by incorporating Natural Language Processing (NLP), machine learning, and domain-specific rules. Yetisgen-Yildiz et al. [4, 5] developed a pipeline to automatically extract semantic components from radiology reports. They first constructed a knowledge model (with an ontology of 11 section categories) of radiology report sections to identify section boundaries using a rule-based approach. The proposed conceptual model achieved performance improvements in all cases, with F-scores of 0.98, 1.00, and 0.80 for pulmonary embolism identification, deep-vein thrombosis, and incidental clinically relevant findings, respectively.

