Abstract

Deep learning has shown promise in the field of computer vision for image recognition. We evaluated two deep transfer learning techniques (feature extraction and fine-tuning) for the diagnosis of breast cancer and compared them with a lesion-based radiomics computer-aided diagnosis (CAD) method. The dataset comprised 2006 breast lesions (1506 malignant and 500 benign) imaged with dynamic contrast-enhanced MRI. For each lesion, the pre-contrast, first post-contrast, and second post-contrast timepoint images were combined to form an RGB image, which served as input to a VGG19 convolutional neural network (CNN) pre-trained on the ImageNet database. The first transfer learning technique, feature extraction, was conducted by extracting the feature output from each of the five max-pooling layers of the pre-trained CNN, average-pooling the features, performing feature reduction, and classifying malignant and benign lesions with a support vector machine applied to the merged CNN features. The second transfer learning technique used a 64% training, 16% validation, and 20% testing dataset split to fine-tune the final fully connected layers of the pre-trained VGG19 to classify the images as malignant or benign. The performance of each of the three CAD methods was evaluated using receiver operating characteristic (ROC) analysis, with the area under the ROC curve (AUC) as the performance metric in the task of distinguishing between malignant and benign lesions. The radiomics CAD (AUC = 0.90) performed significantly better than CNN feature extraction (AUC = 0.84; p < 0.0001); however, no significant difference was found between the radiomics CAD and the fine-tuning method (AUC = 0.86; p = 0.1251). We therefore conclude that transfer learning shows potential as a comparable computer-aided diagnosis technique.
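
For illustration, the CNN-feature-extraction pipeline can be sketched with standard Keras and scikit-learn calls. The VGG19 layer names below are the stock Keras ones; the 224×224 input size, the choice of PCA for feature reduction, the number of retained components, and the SVM settings are illustrative assumptions rather than details taken from the study.

```python
# Sketch of the CNN-feature-extraction arm: stack three DCE-MRI timepoints
# into an RGB image, pull features from the five max-pooling layers of a
# pre-trained VGG19, average-pool them spatially, reduce, and classify.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.models import Model
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def to_rgb(pre, post1, post2):
    # Pre-contrast, first post-contrast, and second post-contrast images
    # become the R, G, and B channels of a single input image.
    return np.stack([pre, post1, post2], axis=-1)

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
pool_layers = [f"block{i}_pool" for i in range(1, 6)]  # the five max-pooling layers
extractor = Model(inputs=base.input,
                  outputs=[base.get_layer(name).output for name in pool_layers])

def cnn_features(images):
    # Average-pool each max-pooling output over its spatial dimensions,
    # then concatenate the pooled vectors into one feature vector per lesion.
    outputs = extractor.predict(preprocess_input(images.astype("float32")))
    return np.concatenate([o.mean(axis=(1, 2)) for o in outputs], axis=1)

# X_train, X_test: (N, 224, 224, 3) RGB lesion images; y_*: 0 = benign, 1 = malignant
# feats = cnn_features(X_train)
# pca = PCA(n_components=50).fit(feats)  # feature reduction; component count assumed
# clf = SVC(kernel="rbf", probability=True).fit(pca.transform(feats), y_train)
# scores = clf.predict_proba(pca.transform(cnn_features(X_test)))[:, 1]
```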
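
The fine-tuning arm can be sketched in the same framework. A common rendering, assumed here, freezes the ImageNet convolutional base and trains a fresh fully connected head for the binary task; the dense-layer width, optimizer, and training schedule are placeholders, while the 64/16/20 split follows the abstract.

```python
# Sketch of the fine-tuning arm: keep the pre-trained convolutional weights
# fixed and train only the final fully connected layers for the two-class task.
from tensorflow.keras.applications import VGG19
from tensorflow.keras import layers, models
from tensorflow.keras.metrics import AUC
from sklearn.model_selection import train_test_split

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the ImageNet convolutional base

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # head size is an assumption
    layers.Dense(1, activation="sigmoid"),  # malignant vs. benign
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[AUC()])

# 64% training / 16% validation / 20% testing, as stated in the abstract:
# X, y = ...  # RGB lesion images and labels
# X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.20, stratify=y)
# X_tr, X_val, y_tr, y_val = train_test_split(X_tmp, y_tmp, test_size=0.20,
#                                             stratify=y_tmp)  # 0.20 of 80% = 16%
# model.fit(X_tr, y_tr, validation_data=(X_val, y_val), epochs=20, batch_size=32)
```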
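
Finally, the ROC comparison reduces to computing an AUC from each method's per-lesion malignancy scores. The variable names here are hypothetical, and the significance test behind the reported p-values (commonly a DeLong test for correlated ROC curves) is not provided by scikit-learn and is omitted.

```python
# Sketch of the ROC/AUC evaluation for each CAD method's test-set scores.
from sklearn.metrics import roc_auc_score, roc_curve

# y_test: true labels; scores_*: per-lesion malignancy scores from each method
# auc_radiomics = roc_auc_score(y_test, scores_radiomics)
# auc_features  = roc_auc_score(y_test, scores_features)
# auc_finetune  = roc_auc_score(y_test, scores_finetune)
# fpr, tpr, _ = roc_curve(y_test, scores_finetune)  # points for an ROC plot
```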
