Background: Ultrasound (US) is a medical imaging modality that plays a crucial role in the early detection of breast cancer. The emergence of numerous deep learning systems has offered promising avenues for the segmentation and classification of breast cancer tumors in US images. However, challenges such as the absence of data standardization, the exclusion of non-tumor images during training, and the narrow scope of single-task methodologies have hindered the practical applicability of these systems, often resulting in biased outcomes. This study aims to explore the potential of multi-task systems for enhancing the detection of breast cancer lesions.
Methods: To address these limitations, our research introduces an end-to-end multi-task framework designed to leverage the inherent correlations between breast cancer lesion classification and segmentation. Additionally, a comprehensive analysis of a widely used public breast cancer ultrasound dataset, BUSI, was carried out, identifying its irregularities and devising an algorithm for detecting duplicate images within it.
Results: Experiments were conducted on the curated dataset to minimize potential bias in the outcomes. Our multi-task framework outperforms single-task approaches in breast cancer detection, achieving improvements of close to 15% in both segmentation and classification. Moreover, a comparative analysis against the state of the art reveals statistically significant improvements on both tasks.
Conclusion: The experimental findings underscore the efficacy of multi-task techniques, showing better generalization when all image types are considered: benign, malignant, and non-tumor. Consequently, our methodology represents an advance towards more general architectures with real clinical applications in the breast cancer field.
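The abstract does not detail the duplicate-detection algorithm applied to BUSI. As a minimal illustrative sketch (not the authors' method), exact duplicates can be found by hashing each image's raw pixel data and grouping images whose hashes collide; all names here are hypothetical:

```python
import hashlib
import numpy as np

def find_duplicates(images):
    """Group images that share identical pixel content.

    `images` maps image IDs to NumPy arrays; returns a list of groups
    (lists of IDs) whose arrays are byte-for-byte identical.
    """
    buckets = {}
    for img_id, arr in images.items():
        # Hash the shape together with the raw bytes so arrays of
        # different shapes never collide.
        key = hashlib.sha256(str(arr.shape).encode() + arr.tobytes()).hexdigest()
        buckets.setdefault(key, []).append(img_id)
    return [ids for ids in buckets.values() if len(ids) > 1]

# Synthetic example: three "images", two of which are identical copies.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
imgs = {"benign_01": a, "benign_02": a.copy(), "malignant_01": a + 1}
print(find_duplicates(imgs))  # [['benign_01', 'benign_02']]
```

Note that exact hashing only catches byte-identical copies; near-duplicates (re-encoded or slightly cropped scans) would require a perceptual-hash or feature-similarity approach instead.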