Abstract
Background: The most important issue for the overall performance of traditional computer-aided diagnosis (CAD) is effective feature extraction. However, extracting a meaningful image feature is a complicated and time-consuming task, which makes fine-tuning the overall performance of traditional CAD more difficult. Deep learning is currently considered the most advanced technology in image classification. Its main benefit is that it reduces the burden of feature selection and classification by learning a set of transformation functions and image features directly from the data. We propose a modified deep learning algorithm for classifying benign and malignant breast tumors in ultrasound images.

Methods: A total of 150 patients with benign breast tumors and 120 patients with malignant tumors were enrolled between January 1, 2017 and December 31, 2018. The ultrasound (US) images were captured by experienced physicians as full views of the lesion, and tumor size was measured according to the largest diameter of the tumor. The patients' ages ranged from 35 to 75 years, and the benign or malignant classification was pathologically proven (by fine-needle cytology, core-needle biopsy, or open biopsy). A modified deep residual network was developed and used to generate US image features for classification. Five-fold cross-validation was used to estimate the error percentage, mean, standard deviation, and 95% confidence interval for the baseline algorithms. Diagnostic accuracy was estimated using the area under the receiver operating characteristic (ROC) curve (AUC) and compared with DeLong's non-parametric test.

Results: After training and testing the modified model with 5-fold cross-validation on the 373-image ultrasound dataset (all images were above BI-RADS C3, according to the report), the AUC was 0.84 (SE = 0.81 to 0.87). The sensitivity was 77.08% and the specificity was 91.07% (p < 1×10^-5). With accuracy as the indicator, the classification of breast tumors in this study showed performance similar to the subjective categories determined by physicians.

Conclusions: Compared with previous studies, the significant improvement in this study is that the input is the full-view image, waiving the preprocessing step of tumor region selection. This decreases human effort and makes automated computer-aided diagnosis possible. Using the deep residual network, the classification performance was not inferior to that of previous studies, showing that deep learning has merit in the benign and malignant classification of breast ultrasound images.

Citation Format: Weichung Shia, Darren Chen. Using deep residual networks for malignant and benign classification of two-dimensional Doppler breast ultrasound imaging [abstract]. In: Proceedings of the 2019 San Antonio Breast Cancer Symposium; 2019 Dec 10-14; San Antonio, TX. Philadelphia (PA): AACR; Cancer Res 2020;80(4 Suppl):Abstract nr P1-02-10.
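The abstract does not describe the architecture of the "modified deep residual network" or the training setup, so the sketch below is only an illustrative assumption of the overall workflow it reports: a residual-network classifier trained on full-view US images and evaluated with 5-fold cross-validated AUC. It uses torchvision's stock resnet18 as a stand-in backbone and scikit-learn for the splits and ROC AUC; the dataset, labels, and hyperparameters are hypothetical placeholders, and the DeLong comparison and reported confidence intervals are not reproduced. It is not the authors' implementation.

# Minimal sketch (assumed, not the authors' code): residual-network binary
# classifier for full-view breast US images with 5-fold cross-validated AUC.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score


def build_model() -> nn.Module:
    """Residual backbone with the final layer replaced for 2-class output."""
    model = resnet18(weights=None)                  # no pretraining assumed
    model.fc = nn.Linear(model.fc.in_features, 2)   # benign vs. malignant
    return model


def train_one_fold(model, loader, epochs=10, lr=1e-4, device="cpu"):
    """Plain cross-entropy training loop for one cross-validation fold."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model


def evaluate_auc(model, loader, device="cpu"):
    """Collect malignancy probabilities and compute the ROC AUC."""
    model.eval()
    scores, targets = [], []
    with torch.no_grad():
        for images, labels in loader:
            probs = torch.softmax(model(images.to(device)), dim=1)[:, 1]
            scores.extend(probs.cpu().numpy())
            targets.extend(labels.numpy())
    return roc_auc_score(targets, scores)


def cross_validate(dataset, labels, k=5, batch_size=16):
    """5-fold cross-validation over the image dataset, returning mean/SD of AUC."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=0)
    aucs = []
    for train_idx, test_idx in skf.split(np.zeros(len(labels)), labels):
        train_loader = torch.utils.data.DataLoader(
            torch.utils.data.Subset(dataset, train_idx),
            batch_size=batch_size, shuffle=True)
        test_loader = torch.utils.data.DataLoader(
            torch.utils.data.Subset(dataset, test_idx), batch_size=batch_size)
        model = train_one_fold(build_model(), train_loader)
        aucs.append(evaluate_auc(model, test_loader))
    return np.mean(aucs), np.std(aucs)

Because the network takes the full-view image as input, no lesion segmentation or region-of-interest cropping step appears in the pipeline, which is the preprocessing the abstract reports being waived.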