Abstract
Retinal blood vessels are a diagnostic biomarker of ophthalmologic disease and diabetic retinopathy, with both thick and thin vessels used for diagnosis and monitoring. Existing deep learning methods attempt to segment retinal vessels with a single unified loss function. However, thick and thin vessels differ in their spatial features, and their biased distribution creates a thickness imbalance, so a unified loss function tends to serve only thick vessels well. To address this challenge, a patch-based generative adversarial network technique is proposed that iteratively learns both thick and thin vessels in fundoscopic images. It introduces an additional loss function that allows the generator network to learn thin and thick vessels, while the discriminator network assists in segmenting both as part of a combined objective function. Compared with state-of-the-art techniques, the proposed model demonstrates improved accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve on the STARE, DRIVE, and CHASEDB1 datasets.
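The abstract does not give the exact form of the combined objective, but a minimal sketch of one plausible reading is shown below: an adversarial term for the generator plus separately weighted segmentation terms over thick-vessel and thin-vessel regions. The function name, the thick/thin mask split, and the weighting factors are illustrative assumptions, not the paper's stated formulation.

```python
# Hypothetical sketch (PyTorch) of a combined generator objective: adversarial loss
# plus segmentation losses weighted separately for thick and thin vessel regions.
# All names, masks, and weights here are illustrative assumptions.
import torch
import torch.nn.functional as F


def generator_loss(d_fake_logits, pred_vessels, gt_vessels, thick_mask, thin_mask,
                   lambda_thick=1.0, lambda_thin=2.0):
    """Combined objective for the generator (segmentation network).

    d_fake_logits : discriminator logits on (image, predicted vessel map) patches
    pred_vessels  : predicted vessel probability map, shape (N, 1, H, W)
    gt_vessels    : ground-truth binary vessel map, same shape
    thick_mask    : binary mask selecting thick-vessel pixels
    thin_mask     : binary mask selecting thin-vessel pixels
    """
    # Adversarial term: push the discriminator to label the generated pair as real.
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))

    # Per-pixel segmentation loss, averaged separately over thick and thin regions
    # so the sparse thin vessels are not dominated by the thick ones.
    bce = F.binary_cross_entropy(pred_vessels, gt_vessels, reduction="none")
    thick_term = (bce * thick_mask).sum() / thick_mask.sum().clamp(min=1.0)
    thin_term = (bce * thin_mask).sum() / thin_mask.sum().clamp(min=1.0)

    return adv + lambda_thick * thick_term + lambda_thin * thin_term
```

Under this reading, the extra thin-vessel term is what lets the generator keep improving on thin vessels even after the thick vessels are segmented well, while the discriminator's adversarial feedback acts on the full vessel map.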