Abstract

U-Net's strong performance in medical image segmentation has given rise to many variants, yet U-Net retains a non-negligible drawback: it cannot accurately segment low-level features such as the edge regions of images, and its simple skip connections remain a weak point for modeling global information. To address these problems, a Channel-wise Cross Fusion Transformer (CCT) and Channel-wise Cross Attention (CCA) are introduced on top of U-Net: the CCT cross-fuses the U-Net encoder features with a Transformer, and the CCA interacts the fused features with the decoder features to eliminate the semantic gap; the resulting network is named Trans-Net. A second branch network, SeU-Net, is built to capture details and edge regions, with SE-Attention added at its skip connections to reinforce important features. The two branches exchange information through a Cross Residual Feature Block (CRFB); a hedged sketch of these attention components follows. Experiments on five datasets demonstrate that the proposed method achieves more accurate segmentation.
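As a rough illustration of the attention components named above, the following PyTorch sketch shows a standard Squeeze-and-Excitation (SE) block of the kind typically placed at skip connections, together with a minimal cross residual feature block that exchanges information between two branches via residual addition. The class names, channel sizes, reduction ratio, and the exact form of the cross-branch exchange are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch (PyTorch): SE attention at a skip connection and a minimal
# cross residual feature block (CRFB) between two branches.
# Shapes, reduction ratio, and the CRFB form are illustrative assumptions.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Standard squeeze-and-excitation: channel-wise reweighting of features."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # reweight skip features


class CRFB(nn.Module):
    """Illustrative cross residual feature block: each branch receives a
    1x1-projected residual from the other branch (assumed form)."""
    def __init__(self, channels: int):
        super().__init__()
        self.t2s = nn.Conv2d(channels, channels, kernel_size=1)  # Trans-Net -> SeU-Net
        self.s2t = nn.Conv2d(channels, channels, kernel_size=1)  # SeU-Net -> Trans-Net

    def forward(self, feat_trans: torch.Tensor, feat_seu: torch.Tensor):
        return feat_trans + self.s2t(feat_seu), feat_seu + self.t2s(feat_trans)


if __name__ == "__main__":
    x_t = torch.randn(2, 64, 56, 56)   # dummy Trans-Net branch features
    x_s = torch.randn(2, 64, 56, 56)   # dummy SeU-Net branch features
    x_s = SEBlock(64)(x_s)             # SE attention on skip features
    y_t, y_s = CRFB(64)(x_t, x_s)      # cross-branch residual exchange
    print(y_t.shape, y_s.shape)        # torch.Size([2, 64, 56, 56]) twice
```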
