Abstract
SNOMED CT provides about 300,000 codes with fine-grained concept definitions to support the interoperability of health data. Coding clinical texts with medical terminologies is not a trivial task and is prone to disagreements between coders. We conducted a qualitative analysis to identify sources of disagreement in an annotation experiment that used a subset of SNOMED CT with some restrictions. A corpus of 20 English clinical text fragments of diverse origins and languages was annotated independently by two medically trained annotators following a specific annotation guideline. Following this guideline, the annotators had to assign sets of SNOMED CT codes to noun phrases, together with concept and term coverage ratings. The annotations were then manually examined against a reference standard to determine the sources of disagreement. Five categories were identified. In our results, the most frequent cause of inter-annotator disagreement was related to human issues; in several cases, disagreements revealed gaps in the annotation guideline and a lack of annotator training. The remaining issues can be influenced by certain SNOMED CT features.
Highlights
Standardised terminologies are important resources for interoperability in electronic health records (EHRs) [1,2]
In the Materials and Methods section, we describe the main elements to understand the scope of the ASSESS CT annotation experiment, i.e. the acquisition of the corpus, the recruitment of the annotators and the annotation guideline
The code coincidence between the annotators and the reference standard was strikingly low: in only 21.6% of the chunks did the two annotators and the reference standard assign exactly the same set of SNOMED CT codes
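To make the exact-match statistic concrete, the following is a minimal sketch of how such a code-coincidence rate could be computed: a chunk counts as a match only when both annotators and the reference standard assign an identical set of codes. The chunk identifiers and SNOMED CT codes below are purely illustrative and are not taken from the study data.

```python
def exact_match_rate(annotator_a, annotator_b, reference):
    """Fraction of chunks where both annotators and the reference
    standard assign exactly the same set of SNOMED CT codes.
    Each argument maps a chunk id to a set of code strings."""
    chunks = reference.keys()
    matches = sum(
        1 for c in chunks
        if annotator_a.get(c) == annotator_b.get(c) == reference[c]
    )
    return matches / len(chunks)

# Illustrative annotations (hypothetical chunks and codes).
ann_a = {"c1": {"22298006"}, "c2": {"38341003"}}
ann_b = {"c1": {"22298006"}, "c2": {"38341003", "59621000"}}
ref   = {"c1": {"22298006"}, "c2": {"38341003"}}

print(exact_match_rate(ann_a, ann_b, ref))  # -> 0.5 (only c1 matches)
```

Because set equality ignores code order, this measure is insensitive to how the annotators listed their codes and only reflects whether the assigned code sets coincide.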
Summary
Standardised terminologies are important resources for interoperability in electronic health records (EHRs) [1,2]. Their use supports hospital reimbursement, audit, research, benchmarking and outcome management [3]. The quality and accuracy of clinical coding are therefore of utmost importance, and several factors affect them. Santos et al. [5] identified barriers such as poor documentation practices in patient medical records and organisational factors, including unrealistic coding deadlines and inadequate training, among others.