Abstract

This study aimed to evaluate interobserver variability of breast ultrasound categorization based on the Breast Imaging Reporting and Data System (BI-RADS) as modified by the Alliance for Breast Cancer Screening in Korea (ABCS-K). Ten breast radiologists with 3-16 years of experience participated in a quality control workshop in March 2016 for the Mammography and Ultrasonography Study for Breast Cancer Screening Effectiveness (MUST BE) trial. Two investigators selected 125 breast lesions that were either pathologically proven to be malignant or clinically proven to be benign with at least 2 years of stability, and prepared PowerPoint slides showing two representative orthogonal images per lesion. After a brief lecture on the modified categorization, the 10 radiologists independently categorized the lesions, blinded to mammographic images and pathologic results. Interobserver variability was measured using kappa statistics. The overall and overall weighted kappa values for the modified categorization (categories 2, 3, 4, and 5) were 0.52 and 0.72, respectively. The overall kappa value was 0.66 when dichotomizing the interpretation into benign (BI-RADS 1, 2, and 3) or suspicious (BI-RADS 4 and 5). However, the corresponding values for the subdivision of category 4 (4a, 4b, and 4c) were 0.37 and 0.56, respectively. The overall kappa value increased to 0.48 when dichotomizing the interpretation into low suspicion (BI-RADS 4a) versus moderate or higher suspicion (BI-RADS 4b and 4c). Using the modified categorization, ABCS-K radiologists showed moderate interobserver agreement for the biopsy decision (categories 2 and 3 vs. 4 and 5) and fair agreement for the subdivision of category 4. Clearer guidance for the modified categorization and further effort to improve interobserver agreement are needed for successful performance of the MUST BE trial.
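The agreement statistics described above can be reproduced in outline as follows. This is a minimal sketch in Python, assuming the readings are stored as a 125 x 10 array of category codes; fleiss_kappa from statsmodels and pairwise quadratic-weighted cohen_kappa_score from scikit-learn are used here as stand-ins for the exact kappa variants used in the study, and the simulated ratings are placeholders rather than study data.

import numpy as np
from itertools import combinations
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Placeholder ratings: 125 lesions x 10 readers, categories 2-5 (not study data).
rng = np.random.default_rng(0)
ratings = rng.integers(2, 6, size=(125, 10))

# Overall multi-rater agreement via Fleiss' kappa on the lesion-by-category count table.
table, _ = aggregate_raters(ratings)
print("Overall kappa (Fleiss):", fleiss_kappa(table))

# Weighted agreement: mean pairwise quadratic-weighted Cohen's kappa across reader pairs.
pairs = combinations(range(ratings.shape[1]), 2)
weighted = np.mean([cohen_kappa_score(ratings[:, i], ratings[:, j], weights="quadratic")
                    for i, j in pairs])
print("Mean pairwise weighted kappa:", weighted)

# Dichotomized agreement: benign (category <= 3) vs. suspicious (category >= 4).
binary = (ratings >= 4).astype(int)
table_bin, _ = aggregate_raters(binary)
print("Overall kappa, benign vs. suspicious:", fleiss_kappa(table_bin))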
