Abstract

Introduction

Administrative health data from emergency departments play an important role in understanding the health needs of the public and the reasons for health care resource use. International Classification of Diseases (ICD) diagnosis codes are widely used to code the reasons for clinical encounters for administrative purposes in emergency departments.

Objective

The purpose of this study was to examine the coding agreement and reliability of ICD diagnosis codes in emergency department records by auditing routinely collected data.

Methods

We randomly sampled 1% of records (n=1636) collected between October and December 2013 from 11 emergency departments in Alberta, Canada. Auditors reviewed each sampled chart and independently assigned a main diagnosis code. We assessed coding agreement and reliability by comparing the codes assigned by auditors with those assigned by hospital coders, using the proportion of agreement and Cohen's kappa. An error analysis reviewed the diagnosis codes with disagreement and categorized them into six groups.

Results

Overall, agreement was 86.5% at the three-digit level and 82.2% at the four-digit level, with reliability (kappa) of 0.86 and 0.82, respectively. Agreement and reliability varied across emergency departments. The two largest categories of coding discrepancy were the use of different codes for the same condition (23.6%) and the use of codes at different levels of specificity (20.9%).

Conclusions

Diagnosis codes in emergency departments show high agreement and reliability, although coding quality varies across hospitals. Stricter coding guidelines on the use of unspecified codes are needed to enhance coding consistency.
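
For reference, the two statistics reported above follow their standard definitions: the proportion of agreement is the share of records to which both the hospital coder and the auditor assigned the same code, and Cohen's kappa corrects that proportion for chance agreement, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e is the agreement expected from the two raters' marginal code frequencies. The short Python sketch below shows how both statistics can be computed at the four-digit and three-digit levels by truncating codes. It is an illustration only, not the study's analysis code; the function name and the ICD-10-CA example codes are hypothetical.

    from collections import Counter

    def agreement_and_kappa(rater_a, rater_b):
        # Proportion of agreement and Cohen's kappa for two parallel code lists.
        n = len(rater_a)
        # Observed agreement: share of records given the same code by both raters.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement from each rater's marginal code frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return p_o, (p_o - p_e) / (1 - p_e)

    # Hypothetical ICD-10-CA main diagnosis codes from coder and auditor.
    coder   = ["J18.9", "S52.5", "R10.4", "I21.0"]
    auditor = ["J18.0", "S52.5", "R10.4", "I21.0"]

    # Full-code (four-digit) comparison, then three-digit category comparison.
    print(agreement_and_kappa(coder, auditor))
    print(agreement_and_kappa([c[:3] for c in coder], [c[:3] for c in auditor]))

On this toy data, truncating to three characters collapses the J18.9/J18.0 disagreement, so agreement rises from 75% to 100%; the same mechanism explains why the study reports higher agreement at the three-digit level than at the four-digit level.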

Highlights

  • Administrative health data from emergency departments play an important role in understanding the health needs of the public and the reasons for health care resource use

  • Following national guidelines developed by the Canadian Institute for Health Information (CIHI), clinical information from emergency department visits is collected in Alberta

  • This study focused on the assessment of the main diagnosis code


