Abstract

Glaucoma is a chronic, degenerative optic neuropathy and the leading cause of irreversible blindness worldwide. The disease typically produces no symptoms for years and can reach an advanced stage before patients notice extensive visual field loss. Therefore, early detection and treatment are crucial to prevent vision loss from this blinding disease. The vertical cup-to-disc ratio (VCDR), the ratio of the vertical diameter of the cup to the vertical diameter of the disc in the optic nerve head region, is an important structural indicator of glaucoma. Estimating VCDR requires accurate segmentation of the optic disc (OD) and optic cup (OC) on fundus images. However, manual annotation of the disc and cup areas is time-consuming and subject to personal experience and opinion. In this study, we proposed an automated deep learning approach for OD and OC segmentation and VCDR derivation from fundus images using Detectron2, a state-of-the-art object detection platform. We trained Mask R-CNN models for OD and OC segmentation and VCDR evaluation. We assessed the performance of our method on the Retinal Fundus Glaucoma Challenge (REFUGE) dataset in terms of the Dice similarity coefficient (DSC) for OD and OC and the mean absolute error (MAE) for VCDR. Our method achieved highly accurate results, with a DSC of 0.9622 for OD, a DSC of 0.8870 for OC, and an MAE of 0.0376 for VCDR on the hold-out test images. This implementation surpassed all top-performing methods in the REFUGE challenge, improving OD and OC DSC by 0.2% and 0.4%, respectively, and reducing the VCDR MAE by 9%. Our method provides an accurate and automated solution for OD and OC segmentation and VCDR estimation.
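As a rough illustration of the kind of pipeline described above, the following sketch shows how a Mask R-CNN model could be configured and trained with Detectron2 for OD/OC instance segmentation. The dataset registration names and paths, the ResNet-50 FPN backbone, and the solver settings are assumptions for illustration only and are not taken from the paper.

```python
import os

from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Hypothetical COCO-format export of the REFUGE annotations:
# one instance class each for optic disc and optic cup.
register_coco_instances("refuge_train", {}, "refuge/train_annotations.json", "refuge/train_images")
register_coco_instances("refuge_val", {}, "refuge/val_annotations.json", "refuge/val_images")

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("refuge_train",)
cfg.DATASETS.TEST = ("refuge_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2   # optic disc, optic cup
cfg.SOLVER.IMS_PER_BATCH = 2          # illustrative values, not the paper's settings
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 5000
cfg.OUTPUT_DIR = "./output_od_oc"

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```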
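The structural measure and evaluation metrics named in the abstract can be computed directly from binary masks. Below is a minimal sketch, assuming the OD and OC predictions are available as 2-D boolean NumPy arrays; the helper function names are hypothetical and not from the paper.

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent of the mask: number of rows from its topmost to bottommost pixel."""
    rows = np.any(mask, axis=1)
    if not rows.any():
        return 0
    top = int(np.argmax(rows))
    bottom = len(rows) - 1 - int(np.argmax(rows[::-1]))
    return bottom - top + 1

def vcdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio: vertical cup diameter over vertical disc diameter."""
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)."""
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

def mae(pred_vcdrs, true_vcdrs) -> float:
    """Mean absolute error between predicted and reference VCDR values."""
    return float(np.mean(np.abs(np.asarray(pred_vcdrs) - np.asarray(true_vcdrs))))
```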
