Abstract

Although U-Net and its variants have achieved great success in medical image segmentation tasks, their segmentation performance on small objects remains unsatisfactory. In this work, we therefore propose a new deep model, ω-Net, for more accurate medical image segmentation. The advancements of ω-Net are threefold. First, it incorporates an additional expansive path into U-Net to introduce an extra supervision signal, yielding more effective and robust segmentation through dual supervision. Second, a multi-dimensional self-attention mechanism is developed to consecutively highlight salient features and suppress irrelevant ones in both the spatial and channel dimensions. Third, to reduce the semantic disparity between the feature maps of the contracting and expansive paths, we integrate diversely-connected multi-scale convolution blocks into the skip connections, where several multi-scale convolutional operations are connected both in series and in parallel. Extensive experimental results on three abdominal CT segmentation tasks show that (i) ω-Net substantially outperforms state-of-the-art methods on medical image segmentation tasks; (ii) the three proposed advancements are all effective and essential to ω-Net's superior performance; and (iii) the proposed multi-dimensional self-attention (resp., diversely-connected multi-scale convolution) is more effective than state-of-the-art attention mechanisms (resp., multi-scale solutions) for medical image segmentation. The code will be released online after this paper is formally accepted.
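
The abstract does not spell out the attention design beyond its acting consecutively in the channel and spatial dimensions. As a minimal illustrative sketch (not the authors' implementation), one plausible reading is a CBAM-style channel gate followed by a spatial gate in PyTorch; the module names, reduction ratio, and pooling choices below are all assumptions:

```python
# Illustrative sketch only -- NOT the paper's released code. It shows one
# plausible form of "consecutive channel- and spatial-wise self-attention";
# names and hyperparameters are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Gate each channel by a weight computed from its global statistics."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))             # per-channel reweighting

class SpatialAttention(nn.Module):
    """Gate each spatial location using pooled cross-channel statistics."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)            # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)           # (B, 1, H, W)
        attn = self.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn                              # per-pixel reweighting

class MultiDimensionalAttention(nn.Module):
    """Apply channel and spatial gating consecutively, as the abstract describes."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))

# Usage: refine a feature map from a contracting-path stage.
feats = torch.randn(2, 64, 128, 128)
refined = MultiDimensionalAttention(64)(feats)       # same shape as feats
```

Such a block would typically be applied to feature maps before they are merged across skip connections; the paper's actual placement and exact formulation may differ.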
