<p>Breast ultrasound images are highly valuable for the early detection of breast cancer. However, these images suffer from low resolution and speckle noise, which affects their interpretability and makes their interpretation heavily dependent on radiologists’ expertise. Like many medical imaging collections, breast ultrasound datasets are scarce and imbalanced, and annotating them is tedious and time-consuming. Transfer learning, a deep learning technique, can be used to overcome this shortage of available images. This paper presents transfer-learning U-Net backbones for the automatic segmentation of breast ultrasound lesions and implements a threshold selection mechanism to deliver optimal, generalizable segmentation of breast tumors. The work uses the public Breast Ultrasound Images (BUSI) dataset and evaluates ten state-of-the-art candidate models as U-Net backbones. We trained these models with five-fold cross-validation on 630 images containing benign and malignant cases. Five of the ten models showed good results, and the best U-Net backbone was DenseNet121, which achieved an average Dice coefficient of 0.7370 and a sensitivity of 0.7255. The model’s robustness was also evaluated on normal cases, where it correctly identified 72 out of 113 images, more than the other four best-performing models.</p>
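The abstract does not specify the implementation framework. As a minimal sketch of the approach it describes, the snippet below assumes PyTorch with the segmentation_models_pytorch library (an assumption, not the authors' stated tooling) to build a U-Net with an ImageNet-pretrained DenseNet121 encoder, and illustrates selecting a binarization threshold that maximizes the Dice coefficient on validation predictions.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with a DenseNet121 encoder pre-trained on ImageNet (transfer learning).
# Library choice and hyperparameters are illustrative assumptions; the paper
# does not state its implementation details.
model = smp.Unet(
    encoder_name="densenet121",
    encoder_weights="imagenet",
    in_channels=1,   # grayscale ultrasound input
    classes=1,       # binary lesion mask
)

def dice_coefficient(logits, target, threshold=0.5, eps=1e-7):
    """Dice coefficient between a thresholded probability map and a binary mask."""
    pred = (torch.sigmoid(logits) > threshold).float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def select_threshold(logits, masks, candidates=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """Pick the cut-off giving the highest Dice on held-out validation data
    (a sketch of the threshold selection mechanism mentioned above)."""
    scores = {t: dice_coefficient(logits, masks, threshold=t).item() for t in candidates}
    return max(scores, key=scores.get)
```

In this sketch the encoder weights provide the transfer-learning component, while the decoder is trained from scratch on the segmentation task; the selected threshold would then be fixed and reused when evaluating each cross-validation fold.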