Abstract

Purpose: Although a change in a mammographic breast mass is the most important finding for characterizing breast cancer in young women with dense breasts, mammography (Mg) is susceptible to false positives and false negatives when distinguishing benign from malignant breast masses, so additional tests such as ultrasound and biopsy may be needed to confirm the result. Despite improvements in classifying breast mass Mg images via deep learning (DL), obtaining large training datasets and ensuring generalization across different datasets with robust, well-optimized algorithms remains a challenge. ImageNet-based transfer learning has been used to address the unavailability of large datasets and robust algorithms, but it has yet to achieve the accuracy, sensitivity, and specificity required for DL models to serve as a standalone tool. Furthermore, previous works are computationally expensive: exhaustive patch separation is carried out to segment the region of interest before training, making processing complex and time-consuming. Here we propose a novel deep learning method, based on multistage transfer learning of an EfficientNetB2 model pre-trained on ImageNet and cancer cell line images, to classify mammographic breast masses as either benign or malignant.

Methods: We trained our model on three publicly available datasets: 13,128 Digital Database for Screening Mammography (DDSM), 7,632 INbreast, and 3,816 Mammographic Image Analysis Society (MIAS) Mg breast mass images. Additionally, we trained our model on a mixed dataset drawn from all three to evaluate robustness. Data were split in a 6:2:2 ratio for training, validation, and testing, respectively. The microscopic cancer cell line dataset comprised 38,080 images.

Results: We obtained an average 5-fold cross-validation AUC of 0.9999, test accuracy of 99.99%, sensitivity of 1, and specificity of 0.9998 on DDSM; AUC of 0.9997, test accuracy of 99.99%, sensitivity of 0.9972, and specificity of 0.9988 on INbreast; AUC of 0.9987, test accuracy of 99.89%, sensitivity of 0.9987, and specificity of 1 on MIAS; and AUC of 0.9997, test accuracy of 99.91%, sensitivity of 0.9993, and specificity of 0.9989 on the mixed dataset. Moreover, we obtained a P-value of 0.019 when testing whether our method yields a statistically significant improvement in test accuracy over conventional ImageNet-based transfer learning on the DDSM dataset.

Conclusion: Our study suggests that utilizing cancer cell line images further improves the learning process, alleviating the need for large Mg training datasets. Moreover, our method achieved better performance without the computationally complex patch separation task. These findings are of crucial importance for the early diagnosis of breast cancer in young women with dense breasts, where mammography struggles.

Citation Format: Gelan Ayana, Jinhyung Park, Se-woon Choe. Patchless deep transfer learning for improved mammographic breast mass classification [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 5052.
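The abstract does not include implementation details, but the multistage transfer-learning idea it describes can be sketched in TensorFlow/Keras as below: an EfficientNetB2 backbone initialized with ImageNet weights is first fine-tuned on cancer cell line images, and the adapted backbone is then reused, with a fresh head, for whole-image benign-vs-malignant classification. The class count, optimizers, learning rates, epochs, and the dummy data pipeline are illustrative assumptions, not the authors' settings.

```python
import tensorflow as tf
from tensorflow.keras import layers

IMG_SHAPE = (260, 260, 3)   # EfficientNetB2's native input resolution
NUM_CELL_LINE_CLASSES = 8   # hypothetical; the abstract does not state this

def with_head(backbone: tf.keras.Model, num_classes: int) -> tf.keras.Model:
    """Attach a fresh classification head to a shared backbone."""
    x = layers.GlobalAveragePooling2D()(backbone.output)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(backbone.input, out)

def dummy_ds(num_classes: int, n: int = 8) -> tf.data.Dataset:
    """Stand-in for a real image pipeline (random tensors, illustration only)."""
    images = tf.random.uniform((n,) + IMG_SHAPE, maxval=255.0)
    labels = tf.random.uniform((n,), maxval=num_classes, dtype=tf.int32)
    return tf.data.Dataset.from_tensor_slices((images, labels)).batch(4)

# Stage 1: start from ImageNet weights and adapt the backbone on
# microscopic cancer cell line images.
backbone = tf.keras.applications.EfficientNetB2(
    include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
stage1 = with_head(backbone, NUM_CELL_LINE_CLASSES)
stage1.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
stage1.fit(dummy_ds(NUM_CELL_LINE_CLASSES), epochs=1)

# Stage 2: reuse the cell-line-adapted backbone (its weights carry over
# because the layers are shared) for binary benign-vs-malignant
# classification of whole mammographic mass images, with no patch
# separation step.
stage2 = with_head(backbone, num_classes=2)
stage2.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
stage2.fit(dummy_ds(num_classes=2), epochs=1)
```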

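The sensitivity, specificity, and AUC reported in the Results are standard binary-classification metrics. A minimal scikit-learn sketch of how they are computed from test labels and predicted malignancy scores follows; the `y_true` and `y_score` arrays are placeholders, not data from the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Placeholder test labels (0 = benign, 1 = malignant) and predicted
# malignancy probabilities; in practice these come from the trained model.
y_true = np.array([0, 0, 0, 1, 1, 1])
y_score = np.array([0.05, 0.40, 0.20, 0.85, 0.70, 0.95])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true positive rate (recall)
specificity = tn / (tn + fp)   # true negative rate
accuracy = (tp + tn) / len(y_true)
auc = roc_auc_score(y_true, y_score)
print(f"acc={accuracy:.4f} sens={sensitivity:.4f} "
      f"spec={specificity:.4f} auc={auc:.4f}")
```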