Abstract

Purpose: To utilize a neural architecture search (NAS) approach to develop a convolutional neural network (CNN) method for distinguishing benign from malignant lesions on breast cone-beam CT (BCBCT).

Method: Data from 165 patients with 114 malignant and 86 benign lesions were collected at two institutions from May 2012 to August 2014. The NAS method autonomously generated a CNN model using one institution's dataset for training (patients/lesions: 71/91) and validation (patients/lesions: 20/23). The model was externally tested on the other institution's dataset (patients/lesions: 74/87), and its performance was compared with fine-tuned ResNet-50 models and with two breast radiologists who independently read the lesions in the testing dataset without knowledge of the lesion diagnoses.

Results: The lesion diameters (mean ± SD) were 18.8 ± 12.9 mm, 22.7 ± 10.5 mm, and 20.0 ± 11.8 mm in the training, validation, and external testing sets, respectively. Compared to the best ResNet-50 model, the NAS-generated CNN model performed three times faster and, in the external testing set, achieved a higher (though not statistically different) AUC of 0.727, with a sensitivity of 80% (95% CI: 66–90%) and a specificity of 60% (95% CI: 42–75%). The performances of the NAS-generated CNN and the two radiologists' visual ratings were not statistically different.

Conclusions: Our preliminary results demonstrate that a CNN autonomously generated by NAS performed comparably to pre-trained ResNet models and to radiologists in predicting malignant breast lesions on contrast-enhanced BCBCT. Unlike ResNet, which must be designed by an expert, the NAS approach may be used to automatically generate a deep learning architecture for medical image analysis.
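
For context on the baseline referenced above, the following is a minimal, hypothetical sketch of how a fine-tuned ResNet-50 baseline for binary benign/malignant classification is commonly set up with torchvision; it is not the authors' actual pipeline, and the input size, class encoding, and learning rate are assumptions.

    # Hypothetical fine-tuning sketch, not the authors' code.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained ResNet-50 as the transfer-learning baseline.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

    # Replace the 1000-class ImageNet head with a 2-class head
    # (benign vs. malignant); all layers remain trainable for fine-tuning.
    model.fc = nn.Linear(model.fc.in_features, 2)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

    # One illustrative training step on a placeholder batch of lesion patches,
    # assumed resized to 224x224 and replicated to 3 channels for ResNet input.
    images = torch.randn(8, 3, 224, 224)   # placeholder image batch
    labels = torch.randint(0, 2, (8,))     # 0 = benign, 1 = malignant
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()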
