Abstract

This study develops an accurate method based on the generative adversarial network (GAN) that addresses the discontinuity of micro-vessel segmentation in retinal images. Image processing has become increasingly efficient since the advent of deep learning methods. We propose an improved GAN that combines SE-ResNet and dilated inception blocks for segmenting retinal vessels (SAD-GAN). The GAN model is improved in the following respects. (1) In the generator, the original convolution block is replaced with an SE-ResNet module. The SE-Net extracts global channel information, strengthening key features while suppressing invalid ones, and the residual structure alleviates the vanishing-gradient problem. (2) An inception block and dilated convolution are introduced into the discriminator, which enhance feature propagation and enlarge the receptive field for better extraction of deep features. (3) An attention mechanism is included in the discriminator to combine local features with their global dependencies and to highlight interdependent channel maps. SAD-GAN performs well on public retinal datasets: on the DRIVE dataset, ROC_AUC and PR_AUC reach 0.9813 and 0.8928, respectively, and on the CHASE_DB1 dataset they reach 0.9839 and 0.9002, respectively. Experimental results demonstrate that the generative adversarial model, combined with a deep convolutional neural network, improves the segmentation accuracy of retinal vessels beyond that of several state-of-the-art methods. A sketch of the two building blocks named above follows.
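For orientation only, the following is a minimal PyTorch sketch of what an SE-ResNet block (for the generator) and a dilated inception block (for the discriminator) of the kind described above might look like. It is not the authors' implementation; all channel counts, kernel sizes, reduction ratios, and dilation rates are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SEResNetBlock(nn.Module):
    """Residual block with a squeeze-and-excitation (SE) gate on its channels."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Squeeze: global average pooling; excitation: bottleneck 1x1 convs + sigmoid.
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        out = self.body(x)
        out = out * self.se(out)      # reweight channels by global channel statistics
        return torch.relu(out + x)    # residual connection eases gradient flow


class DilatedInceptionBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation rates, concatenated."""

    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        branch_ch = out_ch // len(dilations)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )

    def forward(self, x):
        # Concatenating branches with different dilations enlarges the receptive field
        # without reducing spatial resolution.
        return torch.cat([b(x) for b in self.branches], dim=1)
```

Both modules preserve spatial resolution, so they can replace plain convolution blocks inside a GAN generator or discriminator without changing the surrounding architecture.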
