Abstract

Automatic segmentation of breast lesions from ultrasound images plays an important role in computer-aided breast cancer diagnosis. Many deep learning methods based on convolutional neural networks (CNNs) have been proposed for breast ultrasound image segmentation. However, breast ultrasound image segmentation remains challenging due to ambiguous lesion boundaries. We propose a novel dual-stage framework based on the Transformer and the multi-layer perceptron (MLP) for the segmentation of breast lesions. We combine the Swin Transformer block with an efficient pyramid squeezed attention block in a parallel design and introduce bi-directional interactions across the branches, which efficiently extracts multi-scale long-range dependencies to improve the segmentation performance and robustness of the model. Furthermore, we introduce a tokenized MLP block in the MLP stage to extract global contextual information while retaining fine-grained information, enabling the segmentation of more complex breast lesions. We conducted extensive experiments against state-of-the-art methods on three breast ultrasound datasets: BUSI, BUL, and MT_BUS. On benign lesions, the Dice coefficient reached 0.8127 ± 0.2178 and the intersection over union reached 0.7269 ± 0.2370, while the Hausdorff distance was maintained at 3.75 ± 1.83. On the BUSI dataset, the Dice coefficient for malignant lesions improved by 3.09%. The segmentation results on the BUL and MT_BUS datasets also show that our proposed model achieves better segmentation results than other methods. Moreover, external experiments indicate that the proposed model provides better generalization capability for breast lesion segmentation. The dual-stage scheme and the proposed Transformer module capture fine-grained local information and long-range dependencies, helping to relieve the burden on radiologists.
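To make the tokenized MLP idea concrete, the sketch below shows the two basic steps in NumPy: splitting a feature map into non-overlapping patch tokens, then mixing each token's channels with a small two-layer MLP and a residual connection. This is a minimal illustration only; the function names, patch size, hidden width, and random stand-in weights are our assumptions, not the paper's trained components, and the ReLU here stands in for whatever activation the actual block uses.

```python
import numpy as np

def tokenize(image, patch=4):
    """Split an (H, W, C) feature map into non-overlapping patch tokens.

    Returns an (N, patch*patch*C) array, one flattened token per patch.
    Hypothetical helper illustrating the tokenization step of a
    tokenized MLP block; not the paper's implementation.
    """
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0, "patch must divide H and W"
    return (image
            .reshape(H // patch, patch, W // patch, patch, C)
            .transpose(0, 2, 1, 3, 4)        # group the pixels of each patch
            .reshape(-1, patch * patch * C))  # one row per token

def mlp_mix(tokens, hidden=32, seed=0):
    """Mix channels of each token with a two-layer MLP plus residual.

    Weights are random stand-ins (assumption), so the output is only a
    shape/flow illustration, not a trained transformation.
    """
    rng = np.random.default_rng(seed)
    n, d = tokens.shape
    w1 = rng.standard_normal((d, hidden)) * 0.02
    w2 = rng.standard_normal((hidden, d)) * 0.02
    h = np.maximum(tokens @ w1, 0.0)  # ReLU stand-in for the block's activation
    return tokens + h @ w2            # residual connection keeps fine-grained detail

# A 16x16 single-channel map with patch size 4 yields 16 tokens of dim 16.
feature_map = np.ones((16, 16, 1), dtype=np.float32)
toks = tokenize(feature_map, patch=4)
out = mlp_mix(toks)
print(toks.shape, out.shape)  # (16, 16) (16, 16)
```

The residual path is the part that lets the block refine global context without discarding the local detail already present in the tokens.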
