Abstract

Fundus lesion segmentation determines the location and size of diabetic retinopathy lesions in fundus images, which helps doctors develop the best eye treatment plan. However, owing to the scattered distribution of lesions and their similarity to one another, it is extremely difficult to extract representative lesion features and accurately segment lesion areas. To address this problem, a generative adversarial network with multi-attention feature extraction is developed to segment diabetic retinopathy regions. The main contributions are as follows: (1) An improved residual U-Net combined with a self-attention mechanism is designed as the generative network to fully extract local and global lesion features while reducing the loss of key feature information. Considering the correlation between features of the same lesion type across different samples, an external attention mechanism is introduced into the residual U-Net to focus on related features of the same lesions in different samples across the entire dataset. (2) A discriminative network based on the PatchGAN structure is designed to further enhance the segmentation ability of the generative network by discriminating between real and fake samples. The proposed network is evaluated on the public IDRiD dataset, achieving Dice coefficients of 75.7%, 76.53%, 50.06%, and 45.89% for hard exudates (EX), soft exudates (SE), microaneurysms (MA), and haemorrhages (HE), respectively. The experimental results show that the generative adversarial network is well suited for accurate segmentation of diabetic retinopathy lesions in fundus images.
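
The paper's code is not reproduced here; as a rough illustration of the external-attention idea in contribution (1), the sketch below shows a generic external-attention block in PyTorch, following the widely used two-memory-unit formulation. The channel count and memory size are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExternalAttention(nn.Module):
    """Minimal external-attention block: two small learnable memory units
    shared across all samples, so the module can model correlations between
    the same lesion type appearing in different images. Sizes are illustrative."""

    def __init__(self, channels: int = 64, memory_size: int = 64):
        super().__init__()
        # M_k and M_v: external memories shared over the whole dataset.
        self.mk = nn.Linear(channels, memory_size, bias=False)
        self.mv = nn.Linear(memory_size, channels, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map, e.g. from a residual U-Net encoder stage.
        b, c, h, w = x.shape
        feats = x.flatten(2).transpose(1, 2)                   # (B, N, C), N = H*W
        attn = self.mk(feats)                                  # (B, N, S)
        attn = F.softmax(attn, dim=1)                          # normalise over pixels
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-6)   # double normalisation
        out = self.mv(attn)                                    # (B, N, C)
        return out.transpose(1, 2).reshape(b, c, h, w) + x     # residual connection


if __name__ == "__main__":
    block = ExternalAttention(channels=64, memory_size=64)
    demo = torch.randn(2, 64, 32, 32)
    print(block(demo).shape)  # torch.Size([2, 64, 32, 32])
```

Because the memories are learned parameters rather than per-image projections, attention weights implicitly reflect statistics of the whole training set, which is the property the abstract attributes to the external attention mechanism; how the paper integrates this block into the residual U-Net and PatchGAN discriminator is not specified here.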
