Abstract

In recent years, semantic segmentation methods based on deep learning have achieved advanced performance in medical image segmentation. As one of the typical segmentation networks, U-Net has been successfully applied to multimodal medical image segmentation. This paper proposes a recurrent residual convolutional neural network with attention gate connections (R2AU-Net) based on U-Net. It enhances the capability of integrating contextual information by replacing the basic convolutional units in U-Net with recurrent residual convolutional units. Furthermore, R2AU-Net adopts attention gates in place of the original skip connections. The experiments are performed on three multimodal datasets: ISIC 2018, DRIVE, and the public lung dataset used in LUNA and the Kaggle Data Science Bowl 2017. Experimental results show that R2AU-Net achieves much better performance than other improved U-Net algorithms for multimodal medical image segmentation.
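
To make the replacement of basic convolutional units more concrete, the following is a minimal PyTorch-style sketch of a recurrent residual convolutional block of the kind the abstract describes. The class names, channel sizes, and the number of recurrent steps t are illustrative assumptions, not the paper's exact configuration.

```python
import torch.nn as nn

class RecurrentConv(nn.Module):
    """One recurrent convolutional layer: the same conv is applied t times,
    each time fed the sum of the block input and the previous response."""
    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)
        return out

class RRCNNBlock(nn.Module):
    """Recurrent residual convolutional unit: two stacked recurrent conv
    layers plus a residual connection from a 1x1 projection of the input."""
    def __init__(self, in_ch, out_ch, t=2):
        super().__init__()
        self.project = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.body = nn.Sequential(RecurrentConv(out_ch, t), RecurrentConv(out_ch, t))

    def forward(self, x):
        x = self.project(x)
        return x + self.body(x)  # residual connection around the recurrent convs
```

A block like this would take the place of each plain double-convolution stage of U-Net, so contextual information accumulates over the recurrent steps without deepening the network.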

Highlights

  • Medical images play a key role in medical treatment

  • U-Net adds multiple skip connections between the encoder and decoder, which transfer features from the shallow layers of the network to the deep layers. Thus, they help the decoding path recover image details better

  • Experimental results: The experiments are performed on three datasets: DRIVE, International Skin Imaging Collaboration (ISIC) 2018, and the public lung dataset used in LUNA and the Kaggle Data Science Bowl 2017. The following performance indicators, computed from true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), are adopted in this paper: accuracy (AC), sensitivity (SE), specificity (SP), and F1-score (F1); see the sketch after this list
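
The four indicators follow directly from the pixel-wise confusion counts. Below is a minimal NumPy sketch; the function name is hypothetical, binary masks are assumed, and degenerate cases (e.g., an all-background image) would need extra handling.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Pixel-wise AC, SE, SP, and F1 for binary masks (values 0/1).
    Assumes both foreground and background pixels are present."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    ac = (tp + tn) / (tp + tn + fp + fn)   # accuracy
    se = tp / (tp + fn)                    # sensitivity (recall)
    sp = tn / (tn + fp)                    # specificity
    f1 = 2 * tp / (2 * tp + fp + fn)       # F1-score (Dice)
    return {"AC": ac, "SE": se, "SP": sp, "F1": f1}
```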


Summary

Introduction

Medical images play a key role in medical treatment. Computer-aided diagnosis (CAD) is designed to provide doctors with systematic, accurate interpretations of medical images so that patients can be treated better. Ciresan et al. [1] trained networks in a sliding-window fashion to predict a class label for each pixel from the local region (patch) around it. Fully convolutional architectures drop the traditional fully connected layers and instead use deconvolution in the last layer of the network to restore the original image resolution. U-Net adds multiple skip connections between the encoder and decoder, which transfer features from the shallow layers of the network to the deep layers. The original U-Net relies on a multicascaded CNN, which results in wasted computing resources and an increased number of parameters. An extended version of U-Net is proposed, which uses a recurrent residual convolutional neural network with attention gate connections (R2AU-Net) for medical image segmentation. The contributions of this paper can be summarized as follows: firstly, R2AU-Net uses more attention gates (AGs) to handle both deep and shallow features. R2AU-Net is evaluated on three datasets: retinal vascular segmentation (DRIVE dataset), skin lesion segmentation (ISIC 2018 dataset), and lung nodule segmentation (lung dataset).
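
Since the introduction describes attention gates replacing the plain skip connections, a brief additive attention gate sketch may help. This is an illustrative PyTorch implementation under assumed layer sizes (f_g, f_l, f_int), not the paper's exact design; it assumes the gating signal has already been upsampled to the spatial size of the skip features.

```python
import torch.nn as nn

class AttentionGate(nn.Module):
    """Gates encoder skip features x with a decoder gating signal g before
    they are passed across the skip connection."""
    def __init__(self, f_g, f_l, f_int):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(f_g, f_int, kernel_size=1), nn.BatchNorm2d(f_int))
        self.w_x = nn.Sequential(nn.Conv2d(f_l, f_int, kernel_size=1), nn.BatchNorm2d(f_int))
        self.psi = nn.Sequential(nn.Conv2d(f_int, 1, kernel_size=1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # g: gating signal from the decoder; x: skip features from the encoder
        # (assumed to share the same spatial resolution here).
        alpha = self.psi(self.relu(self.w_g(g) + self.w_x(x)))  # coefficients in [0, 1]
        return x * alpha  # suppress irrelevant regions, keep salient ones
```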

Proposed Method
Experimental Results
Methods
Conclusion
