Abstract

Background: While there is much literature describing the radiologic detection of breast cancer, limited data are available on the agreement between experts when delineating and classifying breast lesions. The aim of this work is to measure the level of agreement between expert radiologists when delineating and classifying breast lesions, as demonstrated through Breast Imaging Reporting and Data System (BI-RADS) descriptors and quantitative shape metrics.

Methods: Forty mammographic images, each containing a single lesion, were presented to nine expert breast radiologists using a high-specification interactive digital drawing tablet with stylus. Each reader was asked to manually delineate the breast masses using the tablet and stylus and then visually classify the lesion according to the American College of Radiology (ACR) BI-RADS lexicon. The compactness and elongation of each delineated lesion were computed using Matlab software. The Intraclass Correlation Coefficient (ICC) and Cohen's kappa were used to assess inter-observer agreement for delineation and classification outcomes, respectively.

Results: Inter-observer agreement was fair for BI-RADS shape (kappa = 0.37) and moderate for margin (kappa = 0.58) assessments. Agreement for quantitative shape metrics was good for lesion elongation (ICC = 0.82) and excellent for compactness (ICC = 0.93).

Conclusions: Fair to moderate levels of agreement were shown by radiologists for shape and margin classifications of cancers using the BI-RADS lexicon. When quantitative shape metrics were used to evaluate radiologists' delineation of lesions, good to excellent inter-observer agreement was found. The results suggest that qualitative descriptors such as BI-RADS lesion shape and margin understate the actual level of expert radiologist agreement.
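The abstract does not give the exact Matlab formulas used for the two shape metrics. A minimal Python sketch is shown below under common assumptions: compactness as the isoperimetric ratio 4π·area/perimeter² (1.0 for a perfect disc) and elongation as the minor-to-major axis ratio of the pixel-coordinate covariance ellipse. Both definitions, and the function name `shape_metrics`, are illustrative choices, not the study's documented implementation.

```python
import numpy as np

def shape_metrics(mask):
    """Compute (compactness, elongation) for a binary lesion mask.

    Assumed definitions (not taken from the study):
      compactness = 4*pi*area / perimeter**2   (1.0 for a perfect disc)
      elongation  = sqrt(minor/major eigenvalue of the coordinate covariance)
    """
    mask = mask.astype(bool)
    area = mask.sum()

    # Perimeter: count 4-connected edges between lesion and background pixels.
    padded = np.pad(mask, 1)
    perimeter = sum(
        (padded & ~np.roll(padded, shift, axis)).sum()
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1))
    )
    compactness = 4 * np.pi * area / perimeter ** 2

    # Elongation from the eigenvalues of the pixel-coordinate covariance matrix.
    ys, xs = np.nonzero(mask)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(np.stack([ys, xs]))))
    elongation = np.sqrt(eigvals[0] / eigvals[1])  # minor/major axis ratio

    return compactness, elongation
```

For a 5 x 5 square mask this yields compactness of π/4 (≈ 0.785) and elongation of 1.0; irregular or stellate delineations score lower on both, which is what makes such metrics usable as reader-agreement endpoints alongside the BI-RADS descriptors.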
