Abstract

Breast ultrasound (BUS) imaging is commonly used for breast cancer diagnosis, but the interpretation of BUS images varies with the radiologist's experience. Computer-aided diagnosis (CAD) systems have been proposed to provide the radiologist with an objective, computer-based classification of BUS images. Nevertheless, the majority of these systems rely on handcrafted features designed manually to quantify the tumor. Hence, the accuracy of these CAD systems depends on the capability of the handcrafted features to differentiate between benign and malignant tumors. Convolutional neural networks (CNNs) provide a promising approach to improve the classification of BUS images due to their ability to achieve data-driven extraction of objective, accurate, and generalizable image representations. However, the limited size of the available BUS image databases might restrict the capability of training CNNs from scratch. To address this limitation, we investigate two approaches, namely the deep feature extraction approach and the transfer learning approach, that enable the use of a pre-trained CNN model to achieve accurate classification of BUS images. The results show that the deep feature extraction approach outperforms the transfer learning approach. Moreover, the results indicate that extracting deep features from the pre-trained CNN model, combined with effective feature selection, enables accurate BUS image classification with accuracy, sensitivity, and specificity values of 93.9%, 95.3%, and 92.5%, respectively. These results suggest the feasibility of combining deep features extracted from pre-trained CNN models with effective feature selection algorithms to achieve accurate BUS image classification.
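The pipeline described above, deep features extracted from a pre-trained CNN followed by feature selection and a conventional classifier, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the selection method (mutual information), classifier (RBF SVM), and all dimensions are assumptions, and random vectors stand in for the CNN activations so the sketch stays self-contained.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for deep features: in practice, each row would be the
# activation vector of a pre-trained CNN layer (e.g., a fully
# connected layer near the output) computed for one BUS image.
n_images, n_deep_features = 200, 512
X = rng.standard_normal((n_images, n_deep_features))
y = rng.integers(0, 2, size=n_images)  # 0 = benign, 1 = malignant (synthetic)

# Make a few features informative so selection has something to find.
X[:, :10] += y[:, None] * 2.0

# Feature selection followed by an SVM, evaluated with cross-validation.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=50),
    SVC(kernel="rbf"),
)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f}")
```

On real data, the feature matrix would be produced by running each BUS image through the pre-trained CNN and collecting the chosen layer's activations; the selection step then discards activations that carry little class information before training the classifier.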
