Abstract

The convolutional neural network has significantly improved the efficacy of medical image segmentation. However, deep learning-based segmentation methods still face two challenges: (1) medical images have a vast spatial scale and complex structure, which makes accurate extraction of edge information difficult; (2) during decoding, all channels are typically treated as equally important, even though their actual significance varies. To address these issues, we introduce ResTrans-Unet (residual transformer medical image segmentation network), an automatic segmentation model based on a residual-aware Transformer. The Transformer is enhanced with ResMLP, which improves edge-information capture and speeds up network convergence. In addition, Squeeze-and-Excitation networks, which model channel relationships, are integrated into the decoder to highlight important features and suppress irrelevant ones. The proposed model was validated on two public datasets and compared with advanced models. The experimental results demonstrate the superior performance of ResTrans-Unet in medical image segmentation tasks.
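The abstract does not give implementation details for the decoder's channel-attention stage, but the Squeeze-and-Excitation mechanism it cites is well documented: global average pooling "squeezes" each channel to a scalar, a two-layer bottleneck "excites" a per-channel weight in (0, 1), and the input channels are rescaled by those weights. The following minimal NumPy sketch illustrates that mechanism only; the function name, weight shapes, and reduction ratio are illustrative assumptions, not the paper's code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(x, w1, w2):
    """Illustrative Squeeze-and-Excitation over a feature map x of shape (C, H, W).

    Squeeze: global average pooling collapses each channel to one scalar.
    Excitation: a two-layer bottleneck (ReLU then sigmoid) turns those
    scalars into per-channel weights in (0, 1).
    Reweight: each input channel is scaled by its weight, emphasizing
    informative channels and suppressing irrelevant ones.
    """
    z = x.mean(axis=(1, 2))                    # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))  # excitation: (C,)
    return x * s[:, None, None]                # channel-wise rescaling

# Toy usage: 4 channels, reduction ratio 2; weights are random placeholders.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))   # C -> C/r
w2 = rng.standard_normal((4, 2))   # C/r -> C
y = se_block(x, w1, w2)
```

Because the sigmoid bounds each weight in (0, 1), the block can only attenuate channels relative to one another, which is what lets the decoder suppress less-informative feature maps without changing their spatial layout.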
