Abstract
Automatic breast ultrasound image segmentation helps radiologists improve the accuracy of breast cancer diagnosis. In recent years, convolutional neural networks (CNNs) have achieved great success in medical image analysis. However, they exhibit limitations in modeling long-range relations, which is unfavorable for ultrasound images affected by speckle noise and shadows, and results in decreased accuracy of breast lesion segmentation. Transformers can capture sufficient global information, but they are deficient in acquiring local details and need to be pre-trained on large-scale datasets. In this paper, we propose a Hybrid CNN-Transformer network (HCTNet) to boost breast lesion segmentation in ultrasound images. In the encoder of HCTNet, Transformer Encoder Blocks (TEBlocks), which learn global contextual information, are combined with CNN blocks to extract features. In the decoder of HCTNet, a Spatial-wise Cross Attention (SCA) module based on the spatial attention mechanism is developed to reduce the semantic discrepancy with the encoder. Moreover, residual connections between decoder blocks make the generated features more discriminative by aggregating contextual feature maps at different semantic scales. Extensive experiments on three public breast ultrasound datasets demonstrate that HCTNet outperforms other medical image segmentation methods and recent semantic segmentation methods on breast ultrasound lesion segmentation.
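To make the SCA idea concrete, below is a minimal sketch of a spatial-wise cross attention gate, based only on the abstract's description: a spatial attention map, computed jointly from an encoder skip feature and the corresponding decoder feature, reweights the skip feature to reduce the semantic discrepancy before fusion. The module name, layer choices, and fusion by concatenation are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Spatial-wise Cross Attention (SCA) gate.
# Assumption: a 1x1 convolution over the concatenated encoder/decoder
# features yields a single-channel spatial attention map that gates the
# encoder (skip) feature. Not the authors' actual HCTNet code.
import torch
import torch.nn as nn

class SpatialCrossAttention(nn.Module):
    def __init__(self, enc_channels: int, dec_channels: int):
        super().__init__()
        # Project both streams to a single-channel spatial attention map.
        self.attn = nn.Sequential(
            nn.Conv2d(enc_channels + dec_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, enc_feat: torch.Tensor, dec_feat: torch.Tensor) -> torch.Tensor:
        # enc_feat: skip connection from the encoder, shape (B, Ce, H, W)
        # dec_feat: upsampled decoder feature,        shape (B, Cd, H, W)
        a = self.attn(torch.cat([enc_feat, dec_feat], dim=1))  # (B, 1, H, W)
        # Suppress encoder locations that disagree with decoder semantics,
        # then fuse the gated skip with the decoder stream.
        return torch.cat([enc_feat * a, dec_feat], dim=1)

# Usage: gate a 64-channel skip feature with a 128-channel decoder feature.
sca = SpatialCrossAttention(enc_channels=64, dec_channels=128)
out = sca(torch.randn(1, 64, 56, 56), torch.randn(1, 128, 56, 56))
print(out.shape)  # torch.Size([1, 192, 56, 56])
```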