Background: Digital breast tomosynthesis (DBT) has been widely adopted as a supplemental imaging modality for the diagnostic evaluation of breast cancer and for confirmation studies. In this study, we present a deep learning-based method for characterizing breast tissue patterns in DBT data.
Methods: A set of 5388 2D image patches was produced from 230 right mediolateral oblique, 259 left mediolateral oblique, 18 right craniocaudal, and 15 left craniocaudal single-breast DBT studies, using slice-wise annotations of abnormalities and normal tissue. We implemented a patch classifier, trained on this dataset, to predict tissue classes under two classification scenarios. In the first scenario, tissue samples were classified as malignant, benign, or normal breast tissue; in the second, as malignant mass, benign mass, malignant architectural distortion, benign architectural distortion, or normal breast tissue. We employed transfer learning, initializing the base layers of the model with pre-trained weights from the Globally-Aware Multiple Instance Classifier (GMIC).
Results: High class-wise recall values of 0.8906, 0.8541, and 0.7345 and specificities of 0.9558, 0.9575, and 0.8830 were obtained for the normal, benign, and malignant classes, respectively. The finer-grained classification yielded class-wise recall values of 0.8708, 0.8299, 0.9444, and 0.5723 and specificities of 0.9406, 0.9833, 0.8943, and 0.9652 for the benign mass, normal, malignant architectural distortion, and malignant mass classes, respectively. Benign architectural distortion, however, was confused with benign mass and malignant architectural distortion.
Conclusions: Combining the proposed phenotype classifier with the commonly used malignant-benign-normal classification enables a more detailed assessment of digital breast tomosynthesis images.
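The transfer-learning step described in the methods, initializing the base layers of a patch classifier with pre-trained weights, can be sketched as a partial weight transfer. The snippet below is a minimal illustration assuming a PyTorch setup; the checkpoint path, the stand-in ResNet backbone, and the PatchClassifier module are assumptions for illustration only, not the authors' implementation or the GMIC architecture itself.

```python
# Minimal sketch of partial weight transfer into a patch classifier's
# base layers (assumed PyTorch setup; names and paths are hypothetical).
import torch
import torch.nn as nn
from torchvision.models import resnet18  # stand-in backbone, not GMIC's

class PatchClassifier(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = resnet18(weights=None)   # feature extractor ("base layers")
        self.backbone.fc = nn.Identity()          # strip the original classification head
        self.head = nn.Linear(512, num_classes)   # new task-specific head

    def forward(self, x):
        return self.head(self.backbone(x))

model = PatchClassifier(num_classes=5)  # e.g. the five-class phenotype scenario

# Load a pre-trained checkpoint (e.g. GMIC-style weights) and copy only
# the parameters whose names and shapes match the backbone.
pretrained = torch.load("gmic_pretrained.pth", map_location="cpu")  # hypothetical file
own_state = model.backbone.state_dict()
matched = {k: v for k, v in pretrained.items()
           if k in own_state and v.shape == own_state[k].shape}
own_state.update(matched)
model.backbone.load_state_dict(own_state)
```

Only the backbone receives transferred weights; the newly added head is trained from scratch on the patch dataset.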
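The class-wise recall and specificity values reported in the results can be derived from a multi-class confusion matrix in a one-vs-rest fashion. The sketch below shows that computation with scikit-learn and NumPy; the toy labels are assumptions, not the study's data.

```python
# Sketch of per-class recall and specificity from a multi-class confusion
# matrix (one-vs-rest). Labels and predictions are illustrative only.
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["normal", "benign", "malignant"]                    # three-class scenario
y_true = ["normal", "benign", "malignant", "benign", "normal"]
y_pred = ["normal", "benign", "benign", "benign", "normal"]

cm = confusion_matrix(y_true, y_pred, labels=labels)          # rows: true, cols: predicted
for i, name in enumerate(labels):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    recall = tp / (tp + fn)            # sensitivity for this class
    specificity = tn / (tn + fp)       # true-negative rate against all other classes
    print(f"{name}: recall={recall:.4f}, specificity={specificity:.4f}")
```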