Abstract

The Breast Imaging Reporting and Data System (BI-RADS) was introduced in 1993 to standardize the interpretation of mammograms. Although many studies have assessed the validity of the system, fewer have examined its reliability. Our objective was to identify predictors of reliability as measured by the kappa statistic. We identified studies conducted between 1993 and 2009 that reported kappa values for interpreting mammograms using any edition of BI-RADS. Bivariate and multivariate multilevel analyses were used to examine associations between potential predictors and kappa values. We identified ten eligible studies, which yielded 88 kappa values for the analysis. Potential predictors of kappa included: whether the study included negative cases; whether single- or two-view mammograms were used; whether mammograms were digital or screen-film; whether the fourth edition of BI-RADS was used; the BI-RADS category being evaluated; whether readers were trained; whether readers' professional activities overlapped; the number of cases in the study; and the country in which the study was conducted. Our best multivariate model identified training, use of two-view mammograms, and BI-RADS category (masses, calcifications, and final assessments) as predictors of kappa. Training, use of two-view mammograms, and a focus on mass description may help increase reliability in mammogram interpretation, whereas the calcification and final assessment descriptors are areas for potential improvement. These findings can inform policies on BI-RADS use before the system is introduced in new settings and can help improve current implementations.
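For reference, the kappa statistic referred to throughout measures chance-corrected agreement between readers. In its simplest (Cohen's, unweighted) form it is defined as

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of agreement and \(p_e\) is the proportion of agreement expected by chance. The individual studies summarized here may have used weighted or multi-rater variants; this definition is given only to fix notation.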
