Background: Structured reporting in cardiac imaging is strongly encouraged because it improves quality through consistency. The Coronary Artery Disease - Reporting and Data System (CAD-RADS) was recently introduced to facilitate interdisciplinary communication of coronary CT angiography (CTA) results. We aimed to assess the agreement between manual and automated CAD-RADS classification using a structured reporting platform.

Methods: Five readers prospectively interpreted 500 coronary CTAs using a structured reporting platform that automatically calculates the CAD-RADS score from stenosis and plaque parameters entered manually by the reader. In addition, all readers manually assigned CAD-RADS scores while blinded to the automatically derived results, which served as the reference standard. We evaluated factors influencing reader performance, including CAD-RADS training, clinical load, time of day and level of expertise.

Results: Total agreement between manual and automated classification was 80.2%. Agreement in stenosis categories was 86.7%, whereas agreement in modifiers was 95.8% for "N", 96.8% for "S", 95.6% for "V" and 99.4% for "G". Agreement for "V" improved after CAD-RADS training (p = 0.047). Time of day and clinical load did not influence reader performance (p > 0.05 for both). Less experienced readers had higher total agreement than more experienced readers (87.0% vs 78.0%; p = 0.011).

Conclusions: Although automated CAD-RADS classification uses data entered by the readers, it outperforms manual classification by preventing human errors. Structured reporting platforms with automated calculation of the CAD-RADS score may improve data quality and support standardization of clinical decision making.
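The automated step the platform performs can be illustrated with a minimal sketch. This is not the study's software, which is not described in the abstract; it assumes the publicly documented CAD-RADS 1.0 stenosis thresholds (0%, 1-24%, 25-49%, 50-69%, 70-99%, 100%, with 4B for left main ≥50% or three-vessel ≥70%) and the four modifiers named above (N, S, V, G). All function and parameter names are illustrative.

```python
# Hedged sketch of rule-based CAD-RADS (v1.0) assignment from reader-entered
# stenosis and plaque parameters. Thresholds follow the published scheme;
# the actual platform in the study may differ.

def cad_rads_category(max_stenosis_pct: int,
                      left_main_ge_50: bool = False,
                      three_vessel_ge_70: bool = False) -> str:
    """Map the per-patient maximal stenosis (%) to a CAD-RADS 1.0 category."""
    if left_main_ge_50 or three_vessel_ge_70:
        return "4B"          # left main >=50% or three-vessel >=70%
    if max_stenosis_pct == 100:
        return "5"           # total occlusion
    if max_stenosis_pct >= 70:
        return "4A"          # severe stenosis (70-99%)
    if max_stenosis_pct >= 50:
        return "3"           # moderate stenosis (50-69%)
    if max_stenosis_pct >= 25:
        return "2"           # mild stenosis (25-49%)
    if max_stenosis_pct >= 1:
        return "1"           # minimal stenosis (1-24%)
    return "0"               # no visible stenosis


def apply_modifiers(category: str,
                    nondiagnostic: bool = False,
                    stent: bool = False,
                    vulnerable_plaque: bool = False,
                    graft: bool = False) -> str:
    """Append the modifiers N, S, V, G (order as listed in the abstract)."""
    mods = "".join(letter for letter, flag in (
        ("N", nondiagnostic), ("S", stent),
        ("V", vulnerable_plaque), ("G", graft)) if flag)
    return category + ("/" + mods if mods else "")
```

For example, a 55% maximal stenosis in a stented patient would yield `apply_modifiers(cad_rads_category(55), stent=True)` = "3/S". A deterministic rule set of this kind is what lets the platform avoid the transcription and look-up errors that manual classification is prone to.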