Abstract

Retinal blood vessels, comprising both thick and thin vessels, serve as diagnostic biomarkers for ophthalmologic disease and diabetic retinopathy and are used for diagnosis and monitoring. Existing deep learning methods attempt to segment the retinal vessels using a unified loss function. However, the spatial features of thick and thin vessels differ, and their biased distribution creates a thickness imbalance, so a unified loss function is effective only for thick vessels. To address this challenge, a patch-based generative adversarial network (GAN) technique is proposed that iteratively learns both thick and thin vessels in fundoscopic images. It introduces an additional loss function that allows the generator network to learn thin and thick vessels, while the discriminator network assists in segmenting both as part of a combined objective function. Compared with state-of-the-art techniques, the proposed model demonstrates improved accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve on the STARE, DRIVE, and CHASEDB1 datasets.
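
To illustrate the kind of combined objective the abstract describes, the sketch below mixes a pixel-wise segmentation term, with thin-vessel pixels up-weighted, and an adversarial term from a patch discriminator. This is a minimal assumption-laden sketch in PyTorch, not the authors' implementation: `G`, `D`, the thin-vessel mask, and the weights `lam_thin` and `lam_adv` are all hypothetical placeholders.

```python
# Minimal sketch (not the authors' code) of a combined GAN objective for
# patch-based vessel segmentation. Assumptions: PyTorch; G maps an image
# patch to per-pixel vessel probabilities; D scores (patch, vessel map)
# pairs in (0, 1); thin_mask marks thin-vessel pixels (e.g. derived from
# the ground truth by morphological filtering); lam_thin and lam_adv are
# made-up weighting factors.
import torch
import torch.nn.functional as F

def generator_loss(G, D, patch, gt_mask, thin_mask, lam_thin=2.0, lam_adv=0.1):
    """Segmentation loss with thin-vessel pixels up-weighted, plus an
    adversarial term that rewards fooling the patch discriminator."""
    pred = G(patch)                                          # (N, 1, H, W) in [0, 1]
    bce = F.binary_cross_entropy(pred, gt_mask, reduction="none")
    seg_loss = ((1.0 + lam_thin * thin_mask) * bce).mean()   # emphasise thin vessels
    adv_loss = -torch.log(D(patch, pred) + 1e-8).mean()      # non-saturating GAN term
    return seg_loss + lam_adv * adv_loss

def discriminator_loss(D, patch, pred, gt_mask):
    """Standard GAN discriminator loss: real ground-truth vessel maps vs.
    generated ones (detached so gradients do not flow into G)."""
    real = D(patch, gt_mask)
    fake = D(patch, pred.detach())
    return -(torch.log(real + 1e-8) + torch.log(1.0 - fake + 1e-8)).mean()
```

In this reading, the extra thin-vessel term counteracts the thickness imbalance the abstract points to, while the adversarial term pushes the generator towards vessel maps the discriminator cannot distinguish from the ground truth.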
