Background and Objective: Breast cancer is the most common cancer among women worldwide and a leading cause of death among women. Accurately interpreting these tumors, which are often small and morphologically complex, requires considerable expertise and time. Developing a breast tumor segmentation model to assist clinicians therefore holds great practical significance.

Methods: We propose MTF-UNet, a multi-scale, multi-task segmentation framework. First, instead of the common approach of extracting multi-scale features with different convolution kernel sizes, we use a single kernel size with different numbers of stacked convolutions to obtain multi-scale, multi-level features. To better integrate features across levels and scales, we introduce a new multi-branch feature fusion block (ADF); rather than fusing features with channel and spatial attention, it learns fusion weights across the branches. Second, unlike the conventional approach of attaching a classification task to assist segmentation, we use the predicted numbers of tumor and background pixels as an auxiliary task.

Results: We conducted extensive experiments on our proprietary DCE-MRI dataset and on two public datasets (BUSI and Kvasir-SEG), where our model achieved MIoU scores of 90.4516%, 89.8408%, and 92.8431% on the respective test sets. Ablation studies further demonstrate the contribution of each component and show that our auxiliary prediction branch integrates effectively into other models.

Conclusion: Comprehensive experiments and comparisons with other algorithms demonstrate the effectiveness, adaptability, and robustness of the proposed method. We believe MTF-UNet has great potential for further development in medical image segmentation. The relevant code and data can be found at https://github.com/LCUDai/MTF-UNet.git.
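To make the two ideas in the Methods section concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: a block whose branches share one kernel size but stack different numbers of convolutions with learned fusion weights, and a helper that derives per-image tumor/background pixel counts as an auxiliary regression target. The names MultiDepthBlock, depths, fusion_logits, and pixel_count_targets are illustrative assumptions; the actual ADF block and training details are described in the full paper.

```python
# Hypothetical sketch of the abstract's two ideas (assumed names and details,
# not the authors' code): multi-scale features from one kernel size with
# different convolution depths, and an auxiliary pixel-count target.
import torch
import torch.nn as nn


class MultiDepthBlock(nn.Module):
    """Branches share a 3x3 kernel but stack different numbers of convolutions;
    branch outputs are fused with learned softmax weights (assumption)."""

    def __init__(self, channels: int, depths=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(*[
                nn.Sequential(
                    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                )
                for _ in range(d)
            ])
            for d in depths
        ])
        # one learnable fusion weight per branch
        self.fusion_logits = nn.Parameter(torch.zeros(len(depths)))

    def forward(self, x):
        outs = torch.stack([b(x) for b in self.branches], dim=0)  # (N, B, C, H, W)
        w = torch.softmax(self.fusion_logits, dim=0).view(-1, 1, 1, 1, 1)
        return (w * outs).sum(dim=0)  # weighted fusion back to (B, C, H, W)


def pixel_count_targets(mask: torch.Tensor) -> torch.Tensor:
    """Per-image tumor and background pixel counts from a binary mask (B, H, W),
    used as the auxiliary prediction target."""
    pixels_per_image = mask.flatten(1).shape[1]
    tumor = mask.flatten(1).sum(dim=1).float()
    background = pixels_per_image - tumor
    return torch.stack([tumor, background], dim=1)  # (B, 2)


if __name__ == "__main__":
    x = torch.randn(2, 32, 64, 64)
    print(MultiDepthBlock(32)(x).shape)          # torch.Size([2, 32, 64, 64])
    mask = (torch.rand(2, 64, 64) > 0.5).long()
    print(pixel_count_targets(mask).shape)       # torch.Size([2, 2])
```

In this reading, the auxiliary branch would regress these counts alongside the segmentation loss, giving the network a global cue about lesion size; how the loss terms are weighted is left to the full paper.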