Abstract

Objective: To evaluate inter-observer variability in static breast sonogram final assessments among observers with different levels of breast imaging experience, using the first edition of the Breast Imaging Reporting and Data System (BI-RADS) for ultrasound.

Methods: Thirty women, each with one breast lesion, who underwent breast lesion resection between October 2007 and April 2008 were included in this study. Twelve radiologists independently reviewed two sonograms of each lesion and assigned a final BI-RADS assessment category. Inter-observer variability was measured using kappa statistics. Positive predictive values (PPVs) and negative predictive values (NPVs) for the final assessments were also calculated.

Results: For experienced observers, the kappa values for categories 3, 4 and 5 were 0.72, 0.28 and 0.60, respectively. The NPV of category 3 was 93%, whereas the PPV of category 5 was 97%. All of these values decreased for observers with less breast imaging experience. The PPVs of subcategories 4a, 4b and 4c were 56%, 88% and 69%, respectively.

Conclusions: Using BI-RADS final assessments, radiologists with sufficient breast imaging experience can provide accurate and consistent assessments for breast ultrasonography; however, diagnostic agreement decreases as the radiologist's breast imaging experience decreases.
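For readers unfamiliar with the measures reported above, the sketch below illustrates how pairwise Cohen's kappa, PPV and NPV can be computed for BI-RADS category assignments. It is not the study's code or data: the observer ratings, pathology labels and function names are hypothetical, the study's multi-rater agreement may have been summarized differently (e.g. averaged pairwise or Fleiss' kappa), and BI-RADS 4-5 is assumed here to define a test-positive assessment.

```python
from collections import Counter


def cohens_kappa(ratings_a, ratings_b):
    """Pairwise Cohen's kappa for two observers' category assignments."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)

    # Observed agreement: proportion of lesions given the same category.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement, from each observer's marginal category frequencies.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    return (p_o - p_e) / (1 - p_e)


def ppv_npv(assessments, malignant, positive_categories):
    """PPV and NPV when the listed categories are treated as test-positive."""
    tp = fp = tn = fn = 0
    for cat, is_malignant in zip(assessments, malignant):
        positive = cat in positive_categories
        if positive and is_malignant:
            tp += 1
        elif positive and not is_malignant:
            fp += 1
        elif not positive and is_malignant:
            fn += 1
        else:
            tn += 1
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv


# Hypothetical example: two observers' BI-RADS categories for six lesions
# and the pathology result after resection (True = malignant).
obs1 = ["3", "4a", "4c", "5", "3", "4b"]
obs2 = ["3", "4b", "4c", "5", "4a", "4b"]
malignant = [False, False, True, True, False, True]

print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")
ppv, npv = ppv_npv(obs1, malignant, positive_categories={"4a", "4b", "4c", "5"})
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```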
