Accurate and robust medical image segmentation is crucial for assisting disease diagnosis, treatment planning, and monitoring of disease progression. Adaptability to scale variations and to diverse regions of interest is essential for high accuracy in automatic segmentation methods. Existing methods based on the U-shaped architecture tackle the intra- and inter-scale problems separately with a hierarchical encoder and are therefore restricted in the scope of their multi-scale modeling. In addition, global attention and scaling attention over regions of interest have not been adequately exploited, especially for salient features. To address these two issues, we propose a ConvNet-Transformer hybrid framework named SSCFormer for accurate and versatile medical image segmentation. An intra-scale ResInception module and an inter-scale transformer bridge are designed to collaboratively capture intra- and inter-scale features, enabling small-scale disparity information from a single stage to interact with large-scale information from multiple stages. Global attention and scaling attention are integrated from a spatial-channel-aware perspective. The proposed SSCFormer is evaluated on four different medical image segmentation tasks, and comprehensive experimental results show that it outperforms current state-of-the-art methods.
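To make the intra-/inter-scale idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: `ResInceptionBlock` stands in for the intra-scale multi-branch convolution with a residual shortcut, and `TransformerBridge` for the inter-scale fusion of tokens gathered from several encoder stages. The class names, branch configurations, and channel sizes are illustrative assumptions; the paper's spatial-channel-aware global and scaling attention are not reproduced here.

```python
# Hypothetical sketch (assumed structure, not the SSCFormer source code):
# an inception-style residual block plus a transformer bridge that mixes
# tokens pooled from multiple encoder stages.
import torch
import torch.nn as nn


class ResInceptionBlock(nn.Module):
    """Multi-branch convolutions with a residual shortcut (intra-scale features)."""

    def __init__(self, channels: int):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.branch3 = nn.Conv2d(channels, channels // 4, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels // 4, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Concatenated branches keep the channel count, so the shortcut adds directly.
        out = torch.cat([self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)
        return self.act(out + x)


class TransformerBridge(nn.Module):
    """Self-attention over tokens from several encoder stages (inter-scale fusion)."""

    def __init__(self, dim: int, num_heads: int = 4, depth: int = 2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, stage_features):
        # stage_features: list of (B, C, H_i, W_i) maps with a shared channel dim C.
        tokens = [f.flatten(2).transpose(1, 2) for f in stage_features]
        sizes = [t.shape[1] for t in tokens]
        fused = self.encoder(torch.cat(tokens, dim=1))
        # Split the fused sequence back and restore each stage's spatial layout.
        outs, start = [], 0
        for f, n in zip(stage_features, sizes):
            b, c, h, w = f.shape
            outs.append(fused[:, start:start + n].transpose(1, 2).reshape(b, c, h, w))
            start += n
        return outs


if __name__ == "__main__":
    block = ResInceptionBlock(64)
    bridge = TransformerBridge(dim=64)
    f1 = block(torch.randn(1, 64, 32, 32))   # fine-scale stage
    f2 = torch.randn(1, 64, 16, 16)          # coarser stage
    y1, y2 = bridge([f1, f2])
    print(y1.shape, y2.shape)  # (1, 64, 32, 32) and (1, 64, 16, 16)
```

Under these assumptions, the bridge lets tokens from a fine-resolution stage attend to tokens from coarser stages in a single attention pass, which is the kind of cross-stage interaction the abstract attributes to the inter-scale transformer bridge.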