Abstract

Segmentation of anatomical structures in ultrasound images is a challenging task due to the existence of artifacts inherent to the modality, such as speckle noise, attenuation, shadowing, uneven textures, and blurred boundaries. This paper presents a novel attention-based predict–refine network, called ACU2E-Net, for segmentation of soft-tissue structures in ultrasound images. The network consists of two modules: a predict module, built upon our newly proposed attentive coordinate convolution, and a novel multi-head residual refinement module composed of three parallel residual refinement modules. The attentive coordinate convolution is designed to improve segmentation accuracy by perceiving the shape and positional information of the target anatomy. The proposed multi-head residual refinement module reduces both segmentation bias and variance by integrating residual refinement with ensemble strategies, while avoiding the multi-pass training and inference commonly seen in ensemble methods. To show the effectiveness of our method, we collect a comprehensive dataset of thyroid ultrasound scans from 12 different imaging centers and evaluate our network against state-of-the-art segmentation methods. These comparisons demonstrate the competitive performance of our newly designed network on both transverse and sagittal thyroid images. Ablation studies show that the proposed modules improve the segmentation Dice score of the baseline model from 79.62% to 80.97% and 82.92%, while reducing the variance from 6.12% to 4.67% and 3.21%, in the transverse and sagittal views, respectively.
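To make the two modules more concrete, below is a minimal PyTorch sketch of the ideas the abstract names: a coordinate convolution augmented with an attention gate, and a multi-head refinement stage whose parallel heads each predict a residual correction and are ensembled in a single forward pass. The abstract gives no implementation details, so the layer sizes, the squeeze-and-excitation style attention gate, and the head-averaging scheme here are illustrative assumptions, not the authors' ACU2E-Net code.

```python
import torch
import torch.nn as nn


class AttentiveCoordConv(nn.Module):
    """CoordConv-style block with a channel-attention gate (illustrative)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # +2 input channels for the normalized (x, y) coordinate maps,
        # which give the convolution access to positional information
        self.conv = nn.Conv2d(in_ch + 2, out_ch, kernel_size=3, padding=1)
        # squeeze-and-excitation style channel attention (assumed design)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        feat = self.conv(torch.cat([x, xs, ys], dim=1))
        return feat * self.attn(feat)  # gate features by channel attention


class MultiHeadResidualRefinement(nn.Module):
    """Parallel refinement heads ensembled in one forward pass (illustrative)."""

    def __init__(self, n_heads: int = 3):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(16, 1, 3, padding=1),
            )
            for _ in range(n_heads)
        )

    def forward(self, coarse_logits: torch.Tensor) -> torch.Tensor:
        # each head predicts a residual correction to the coarse prediction
        refined = [coarse_logits + head(coarse_logits) for head in self.heads]
        # averaging the heads acts as an ensemble without multi-pass inference
        return torch.stack(refined, dim=0).mean(dim=0)


# usage sketch on a dummy grayscale ultrasound patch
x = torch.randn(2, 1, 128, 128)
feat = AttentiveCoordConv(in_ch=1, out_ch=32)(x)
mask = MultiHeadResidualRefinement(n_heads=3)(torch.randn(2, 1, 128, 128))
```

Averaging the three heads in one pass is what lets this ensemble-style refinement keep single-pass training and inference, which is the trade-off the abstract highlights.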
