Abstract

Background: 3D digital breast tomosynthesis (DBT) offers greater sensitivity than 2D mammography to the presence of architectural distortions (AD), a suspicious finding requiring biopsy. This higher sensitivity to AD is only beneficial if DBT can differentiate benign AD, such as radial scars (RS), from malignant AD (MAD). Automated analysis of clinical images using neural networks has the potential to enhance the accuracy of breast cancer diagnosis, but such applications have focused on screening and currently struggle with more challenging diagnoses (e.g., MAD vs. RS), where computer assistance would provide great clinical utility. In this work, we define a deep learning (DL) framework that amalgamates predictions from a group of neural networks, specialized for multiple lesion regions and different acquisition views, to distinguish MAD from RS.

Methods: A retrospective analysis was conducted of 69 patients screened at a single institution, each with AD visible on one or more DBT views. Stereotactic core biopsy determined 42 AD to be MAD and the remaining 27 to be RS. An attending breast radiologist manually identified a circular region of interest (ROI) containing the AD on a single slice of the mediolateral (ML, n=72) and/or craniocaudal (CC, n=68) views. 56 patients were used for training and tuning the DL networks in cross-validation, while 25 volumes from 13 patients (8 MAD, 5 RS) were held out for independent testing. Volumetric samples of 24x24x12 voxels were extracted randomly within each ROI as input to the networks. Two neural networks were trained to separately recognize patterns of MAD in the intra- and peri-lesional regions. Pairs of networks were trained separately for analysis of the CC and ML views, resulting in 4 network variations. These networks were first assessed individually by cross-validation in the training set; predictions from each were then combined for diagnosis in the independent test set. Performance was assessed by area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy.

Results: All networks individually achieved strong performance in distinguishing RS from MAD when assessed via cross-validation in the training set. On the independent test set, networks trained to recognize patterns of malignancy at the center of the distortion outperformed those focused on the lesion margin, and the CC view was the most discriminating in neural network assessment.

Independent Test Performance
Network Type        AUC   Sensitivity  Specificity
Intralesional, CC   0.65  0.80         0.50
Perilesional, CC    0.51  0.40         0.63
Intralesional, ML   0.49  0.60         0.38
Perilesional, ML    0.55  0.60         0.50
Combined            0.80  0.60         1.00

When predictions from these networks were combined for diagnosis on the independent test set, performance increased significantly to an AUC of 0.80, with a sensitivity of 0.60 and a specificity of 1.00.

Conclusions: In this investigation, we demonstrated that specialized deep learning classifiers, created to address the diagnosis of MAD vs. RS, can assist in diagnostic settings without misclassifying any benign RS as malignant. Our approach combines neural networks to make a diagnosis based on multiple spatial regions from multiple DBT views. Its very high specificity may reduce the number of benign RS biopsies and surgical excisions for AD detected by DBT. Future work will further validate these findings on a larger, multi-institutional cohort and explore their influence on clinical decision-making.
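To make the patch-extraction step in Methods concrete, the following is a minimal Python sketch of random volumetric sampling within a circular ROI. The function name, array layout, and coordinate convention are assumptions for illustration; the abstract does not specify the authors' implementation.

```python
import numpy as np

def sample_patches(volume, roi_center, roi_radius, n_patches=20,
                   patch_size=(24, 24, 12), rng=None):
    """Randomly extract volumetric patches whose in-plane centers fall
    inside a circular ROI annotated on a single DBT slice.

    volume      -- 3D NumPy array indexed as (y, x, slice); hypothetical layout
    roi_center  -- (y, x, slice) coordinates of the annotated ROI center
    roi_radius  -- in-plane radius of the circular ROI, in pixels

    Assumes the ROI lies far enough from the volume border that valid
    24x24x12 patches exist.
    """
    rng = rng or np.random.default_rng()
    h, w, d = patch_size
    cy, cx, cz = roi_center
    patches = []
    while len(patches) < n_patches:
        # Draw a random in-plane offset; keep it only if it lies inside
        # the circular ROI (rejection sampling).
        dy, dx = rng.uniform(-roi_radius, roi_radius, size=2)
        if dy * dy + dx * dx > roi_radius ** 2:
            continue
        y0 = int(cy + dy) - h // 2
        x0 = int(cx + dx) - w // 2
        z0 = cz - d // 2
        patch = volume[y0:y0 + h, x0:x0 + w, z0:z0 + d]
        if patch.shape == (h, w, d):  # discard samples clipped at the border
            patches.append(patch)
    return np.stack(patches)
```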
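The abstract states that predictions from the four specialized networks (intra-/peri-lesional x CC/ML) were combined for diagnosis, but does not specify the fusion rule. Below is a sketch using a simple mean of per-network mean probabilities as a plausible placeholder; the actual aggregation may differ.

```python
import numpy as np

def combine_predictions(pred_lists):
    """Fuse per-patch malignancy probabilities from the available
    networks into a single lesion-level score.

    pred_lists -- iterable of 1D arrays, one per network/view available
                  for the lesion, each holding patch-level probabilities.
    Fusion rule here (mean of per-network means) is an assumption, not
    the method reported in the abstract.
    """
    per_network_scores = [float(np.mean(p)) for p in pred_lists]
    return float(np.mean(per_network_scores))
```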
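For the evaluation metrics named in Methods (AUC, sensitivity, specificity), a standard computation looks like the sketch below, with MAD coded as the positive class. The 0.5 decision threshold is illustrative; the operating point used in the abstract is not reported.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, y_score, threshold=0.5):
    """Compute AUC, sensitivity, and specificity for lesion-level scores.

    y_true  -- 1 = malignant AD (MAD), 0 = radial scar (RS)
    y_score -- combined network probability per lesion
    """
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return {
        "AUC": roc_auc_score(y_true, y_score),
        "Sensitivity": tp / (tp + fn),
        "Specificity": tn / (tn + fp),
    }
```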
Citation Format: Tristan Maidment, Nathaniel Braman, Yijiang Chen, Farhad Mehrkhani, Uliyana Yankevich, Donna Plecha, Anant Madabhushi. A combination of intra- and peri-lesional deep learning classifiers from multiple views enables accurate diagnosis of architectural distortion malignancy with digital breast tomosynthesis [abstract]. In: Proceedings of the 2019 San Antonio Breast Cancer Symposium; 2019 Dec 10-14; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2020;80(4 Suppl):Abstract nr PD9-03.
