Abstract

In recent years, transformer-based methods such as TransUNet and SwinUNet have been successfully applied to medical image segmentation. However, these methods all follow a high-to-low-resolution design, recovering high-resolution feature representations from low-resolution ones, and this structure loses low-level semantic information in the encoder stage. In this paper, we propose a new framework, MR-Trans, that maintains high-resolution and low-resolution feature representations simultaneously. MR-Trans consists of three modules: a branch partition module, an encoder module, and a decoder module. The branch partition module constructs multiple branches at different resolutions. In the encoder module, we adopt Swin Transformer blocks to capture long-range dependencies on each branch and propose a new feature fusion strategy that fuses features of different scales across branches. We further propose a novel decoder network that combines PSPNet and FPNet to improve recognition at different scales. Extensive experiments on two different datasets demonstrate that our method outperforms previous state-of-the-art methods for medical image segmentation.
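
To make the described architecture concrete, the sketch below outlines one way such a multi-resolution transformer segmentation network could be structured in PyTorch. It is an illustrative approximation only, not the authors' implementation: the class names (MRTransSketch, BranchTransformer), the branch widths and strides, the use of plain nn.TransformerEncoderLayer in place of Swin Transformer blocks, and the simplified single-scale PSP-style pooling with FPN-style upsampling fusion are all assumptions made for this example.

```python
# Minimal, illustrative PyTorch sketch of a multi-resolution transformer
# segmentation network in the spirit of MR-Trans. All names and widths are
# hypothetical; Swin Transformer blocks are approximated by plain
# nn.TransformerEncoderLayer for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BranchTransformer(nn.Module):
    """Self-attention over one resolution branch (stand-in for Swin blocks)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim,
            batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        tokens = self.block(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class MRTransSketch(nn.Module):
    def __init__(self, in_ch=3, widths=(32, 64, 128), num_classes=2):
        super().__init__()
        # Branch partition: parallel branches at 1/4, 1/8 and 1/16 resolution.
        self.stems = nn.ModuleList(
            nn.Conv2d(in_ch, w, 3, stride=2 ** (i + 2), padding=1)
            for i, w in enumerate(widths))
        # Encoder: one transformer block per branch (kept at its own scale).
        self.encoders = nn.ModuleList(BranchTransformer(w) for w in widths)
        # Cross-branch fusion: project every branch to a common width.
        self.fuse = nn.ModuleList(nn.Conv2d(w, widths[0], 1) for w in widths)
        # PSP-style global context pooling on the fused map (single scale here).
        self.psp_pool = nn.AdaptiveAvgPool2d(1)
        self.psp_proj = nn.Conv2d(widths[0], widths[0], 1)
        self.head = nn.Conv2d(2 * widths[0], num_classes, 1)

    def forward(self, x):
        size = x.shape[-2:]
        # Multi-resolution branches, each refined by self-attention.
        feats = [enc(stem(x)) for stem, enc in zip(self.stems, self.encoders)]
        # FPN-style fusion: upsample every branch to the finest resolution.
        target = feats[0].shape[-2:]
        fused = sum(
            F.interpolate(proj(f), size=target, mode="bilinear",
                          align_corners=False)
            for proj, f in zip(self.fuse, feats))
        # Broadcast the pooled global context and concatenate with the fused map.
        ctx = self.psp_proj(self.psp_pool(fused)).expand_as(fused)
        out = self.head(torch.cat([fused, ctx], dim=1))
        return F.interpolate(out, size=size, mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    model = MRTransSketch(num_classes=9)
    print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 9, 224, 224])
```

The point the sketch illustrates is the design choice stated in the abstract: high-resolution features are kept alive in their own branch throughout the encoder, rather than being reconstructed from a low-resolution bottleneck as in high-to-low-resolution networks.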
