Segmenting breast tumors in ultrasound images is pivotal for computer-aided diagnosis (CAD) systems aimed at detecting breast cancer. This research contributes directly to the Sustainable Development Goal (SDG 3), Good Health and Well-Being, by leveraging innovative deep learning techniques to boost the accuracy of breast tumor segmentation. Accurate segmentation is crucial for precisely determining tumor size, shape, and location, which in turn facilitates subsequent tumor quantification and classification. However, this task presents challenges, particularly for small tumors. The complexity arises from factors such as speckle noise, variations in tumor shape and size across patients, and the presence of tumor-like regions in the images. Although deep learning-based methods have shown remarkable success in biomedical image analysis, current state-of-the-art approaches still struggle to achieve satisfactory performance in segmenting small breast tumors. In this paper, the BUS (Breast Ultrasound) dataset was initially applied to the state-of-the-art DeepLabV3+ architecture for the segmentation of breast lesions in ultrasound images. Then, to focus on more informative features, a modified DeepLabV3+ is proposed that integrates a CBAM (Convolutional Block Attention Module) into both the encoder and the decoder. Finally, a comparative analysis of the state-of-the-art DeepLabV3+ model and the proposed modified DeepLabV3+ is presented in terms of performance metrics including Dice coefficient, Intersection over Union (IoU), precision, recall, and specificity. The proposed model demonstrates outstanding performance across these evaluation metrics, achieving precision, recall, specificity, Dice coefficient, and IoU values of 0.974, 0.933, 0.997, 0.951, and 0.933, respectively.
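The CBAM block mentioned above sequentially applies channel attention (which channels matter) and spatial attention (where in the feature map to look) to refine encoder and decoder features. A minimal PyTorch sketch of the standard CBAM formulation follows; the class names, reduction ratio, and kernel size are illustrative defaults from the original CBAM design, not necessarily the exact configuration used in this paper:

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: shared MLP over global avg- and max-pooled features."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    """Spatial attention: conv over channel-wise avg and max maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class CBAM(nn.Module):
    """CBAM: channel attention followed by spatial attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.ca(x)   # reweight channels
        return x * self.sa(x)  # reweight spatial locations


# Example: refine a hypothetical encoder feature map (shape is unchanged)
feat = torch.randn(1, 64, 32, 32)
refined = CBAM(64)(feat)
```

In a modified DeepLabV3+, such a block would typically be inserted after the backbone's encoder output and after the decoder's feature fusion; since CBAM preserves tensor shape, it drops in without altering the rest of the architecture.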