Abstract

The purpose of this study was to describe a new, broadly applicable radiology report categorization (RADCAT) system, developed collaboratively by radiologists and emergency department (ED) physicians, and to establish its usability and performance by measuring interobserver variation. In collaboration with our ED colleagues, we developed the RADCAT system for all imaging studies performed in our level-1 trauma center, with five categories spanning the spectrum from normal findings to emergent, life-threatening findings. During a pilot phase, four radiologists used the system in real time to categorize a minimum of 400 reports in the ED. From this pool of categorized studies, 58 reports were then selected semi-randomly, de-identified, stripped of their original categorization, and recategorized based on the narrative radiology report by 12 individual reviewers (6 radiologists and 6 ED physicians). Interobserver variation among all reviewers, among radiologists only, and among ED physicians only was calculated using Cohen's kappa statistic and Kendall's coefficient of concordance. Overall, agreement between radiologists and ED physicians was substantial (κ=0.73, p<0.0001), as was agreement within each category (all κ>0.60, p<0.0001). The lowest agreement was observed for RADCAT-3 (κ=0.61, p<0.0001) and the highest for RADCAT-1 (κ=0.85, p<0.0001). Concordance was high among radiologists, among ED physicians, and across the combined group (all W>0.90, p<0.0001). The RADCAT system is consistently understood by both radiologists and ED physicians when categorizing a wide range of imaging studies, and warrants further assessment and validation. Based on these pilot results, we plan to adopt the RADCAT scheme and further evaluate its performance.
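For readers who want to reproduce this style of agreement analysis, the sketch below shows one way to compute a pairwise Cohen's kappa and Kendall's coefficient of concordance (W) in Python. It is a minimal illustration only: the ratings array, reviewer labels, and the tie-free form of W are assumptions for demonstration, not the study's actual data or analysis code.

    # Minimal sketch (not the authors' analysis code): pairwise Cohen's
    # kappa and Kendall's W for hypothetical RADCAT labels (1-5).
    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    from scipy.stats import rankdata

    # Hypothetical ratings: rows = reviewers, columns = reports.
    # (Illustrative values only; the study used 12 reviewers x 58 reports.)
    ratings = np.array([
        [1, 3, 5, 2, 4, 3],   # reviewer A
        [1, 3, 4, 2, 4, 3],   # reviewer B
        [1, 2, 5, 2, 4, 3],   # reviewer C
    ])

    # Pairwise agreement between two reviewers, chance-corrected.
    kappa_ab = cohen_kappa_score(ratings[0], ratings[1])

    def kendalls_w(scores: np.ndarray) -> float:
        """Kendall's coefficient of concordance W (no tie correction).

        scores: (m raters, n items) array of ordinal ratings.
        """
        m, n = scores.shape
        # Rank each rater's scores across items, then sum ranks per item.
        ranks = np.apply_along_axis(rankdata, 1, scores)
        rank_sums = ranks.sum(axis=0)
        # Sum of squared deviations of the rank sums from their mean.
        s = ((rank_sums - rank_sums.mean()) ** 2).sum()
        return 12.0 * s / (m ** 2 * (n ** 3 - n))

    w = kendalls_w(ratings)
    print(f"kappa(A,B) = {kappa_ab:.2f}, Kendall's W = {w:.2f}")

Values of W near 1 indicate that reviewers rank the reports almost identically, which is the pattern the abstract reports (all W>0.90).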
