Abstract

As populations age in many countries, the prevalence of neovascular age-related macular degeneration (nAMD) is expected to rise. Morphological parameters observed in spectral-domain optical coherence tomography (SD-OCT) images, such as intraretinal fluid (IRF), subretinal fluid (SRF), subretinal hyperreflective material (SHRM), and pigment epithelium detachment (PED), are vital markers for the proper treatment of nAMD, particularly for assessing treatment response in order to determine appropriate treatment intervals and when to switch anti-vascular endothelial growth factor (VEGF) agents. Precise evaluation of changes in nAMD lesions and patient-specific treatment both require quantitative evaluation of the lesions in OCT volume scans. However, manual segmentation is resource-intensive, and the number of studies on automatic segmentation is growing rapidly. Improving automated segmentation performance on SD-OCT images requires long-range contextual reasoning about the spatial relationships between retinal lesions and layers. Considering these points, this paper proposes GAGUNet, a graph convolutional network (GCN)-assisted attention-guided UNet with a novel global reasoning module. The dataset used in the main experiments of this study was rigorously reviewed by a retinal specialist at Konkuk University Hospital in Korea, who contributed to both data preprocessing and validation to ensure a qualitative assessment. We also conducted experiments on the RETOUCH dataset to demonstrate the scalability of the proposed model. Overall, our model outperforms the baseline models in both quantitative and qualitative evaluations.
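The abstract names the architecture but not its internals. For intuition, the sketch below shows a minimal GloRe-style graph-based global reasoning block of the kind the abstract alludes to: spatial features are projected onto a small set of graph nodes, a graph convolution propagates long-range context among them, and the result is projected back onto the feature map. This is a hedged illustration, not the paper's implementation; the class name `GlobalReasoningUnit` and hyperparameters such as `num_nodes` and `node_dim` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GlobalReasoningUnit(nn.Module):
    """Sketch of a graph-based global reasoning block (GloRe-style).

    Projects an (B, C, H, W) feature map onto `num_nodes` graph nodes,
    applies one graph-convolution step in node space, and projects the
    reasoned node states back to the pixel grid with a residual fusion.
    """

    def __init__(self, in_channels: int, num_nodes: int = 16, node_dim: int = 64):
        super().__init__()
        self.phi = nn.Conv2d(in_channels, node_dim, kernel_size=1)     # reduce channel dim
        self.theta = nn.Conv2d(in_channels, num_nodes, kernel_size=1)  # soft node assignment
        # Graph convolution over node space: adjacency mixing + state update,
        # both realized as 1x1 (i.e. pointwise) 1-D convolutions.
        self.gcn_adj = nn.Conv1d(num_nodes, num_nodes, kernel_size=1)
        self.gcn_state = nn.Conv1d(node_dim, node_dim, kernel_size=1)
        self.extend = nn.Conv2d(node_dim, in_channels, kernel_size=1)  # back-projection
        self.bn = nn.BatchNorm2d(in_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        feats = self.phi(x).flatten(2)                     # (B, node_dim, H*W)
        assign = self.theta(x).flatten(2)                  # (B, num_nodes, H*W)
        # Aggregate pixels into node states: (B, num_nodes, node_dim)
        nodes = torch.bmm(assign, feats.transpose(1, 2))
        # One GCN step, (I - A) V W, on the fully connected node graph
        nodes = nodes - self.gcn_adj(nodes)
        nodes = torch.relu(self.gcn_state(nodes.transpose(1, 2)))  # (B, node_dim, num_nodes)
        # Distribute node states back to pixels via the same assignment
        out = torch.bmm(nodes, assign).view(b, -1, h, w)   # (B, node_dim, H, W)
        return x + self.bn(self.extend(out))               # residual fusion


# Usage sketch: insert at a UNet bottleneck so every pixel can attend to
# global lesion/layer context before decoding.
bottleneck = torch.randn(2, 256, 32, 64)
print(GlobalReasoningUnit(in_channels=256)(bottleneck).shape)  # torch.Size([2, 256, 32, 64])
```

Placing such a block at the bottleneck is a common design choice because that is where the receptive field is largest and the cost of reasoning over a handful of graph nodes is cheapest; whether GAGUNet places it there is an assumption here.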
