Abstract

Glaucoma is a serious ocular disorder whose screening and diagnosis are carried out by examination of the optic nerve head (ONH). The color fundus image (CFI) is the most common modality used for ocular screening. In a CFI, the optic disc, which appears in the central region, and the optic cup within the disc are examined to determine one of the important cues for glaucoma diagnosis, the optic cup-to-disc ratio (CDR). Computing the CDR requires accurate segmentation of the optic disc and cup. Another important cue for glaucoma progression is the variation of depth in the ONH region. In this paper, we first propose a deep learning framework to estimate depth from a single fundus image. As in many medical imaging tasks, monocular retinal depth estimation suffers from a scarcity of labeled data. To overcome this problem, we pretrain the deep network; instead of using a denoising autoencoder, we propose a new pretraining scheme called pseudo-depth reconstruction, which serves as a proxy task for retinal depth estimation. Empirically, we show pseudo-depth reconstruction to be a better proxy task than denoising. Our results outperform existing techniques for depth estimation on the INSPIRE dataset. To exploit the depth map for optic disc and cup segmentation, we propose a novel fully convolutional guided network in which the network uses the depth map as a guide alongside the color fundus image. We propose a convolutional block, called the multimodal feature extraction block, to extract and fuse the features of the color image and the guide image. We extensively evaluate the proposed segmentation scheme on three datasets: ORIGA, RIM-ONE r3, and DRISHTI-GS. The performance of the method is comparable to, and in many cases outperforms, the most recent state of the art.
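As context for the CDR cue, the vertical cup-to-disc ratio is conventionally the ratio of the vertical extents of the segmented cup and disc. The sketch below is a minimal illustration over binary masks; the function names and the use of NumPy are our own, not taken from the paper.

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Vertical extent (in pixels) of a 2D binary segmentation mask."""
    rows = np.any(mask, axis=1)                      # rows containing any foreground pixel
    if not rows.any():
        return 0
    top = np.argmax(rows)                            # first foreground row
    bottom = len(rows) - np.argmax(rows[::-1]) - 1   # last foreground row
    return bottom - top + 1

def vertical_cdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary cup and disc masks."""
    return vertical_diameter(cup_mask) / max(vertical_diameter(disc_mask), 1)
```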
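The abstract does not define how the pseudo-depth targets are constructed, so the following is only a hedged sketch of the pretraining idea: train an encoder-decoder to reconstruct a proxy depth map from the fundus image, then reuse the weights for the real depth-estimation task. The tiny architecture and the brightness-based target here are assumptions for runnability, not the paper's method.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    """Stand-in fully convolutional encoder-decoder (the paper's actual
    architecture is not specified in the abstract)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def pseudo_depth_target(fundus: torch.Tensor) -> torch.Tensor:
    # ASSUMPTION: a crude proxy depth from image brightness; the paper's
    # actual pseudo-depth construction may differ.
    return fundus.mean(dim=1, keepdim=True)

model = TinyEncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# One pretraining step on the proxy task: reconstruct pseudo-depth.
images = torch.rand(4, 3, 64, 64)   # stand-in fundus batch
loss = loss_fn(model(images), pseudo_depth_target(images))
opt.zero_grad(); loss.backward(); opt.step()
# After pretraining, the same weights initialize the depth-estimation
# network, which is fine-tuned on the small labeled depth dataset.
```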
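For the guided segmentation network, one plausible reading of the multimodal feature extraction block is two parallel convolutional branches, one per modality, whose outputs are fused into a single feature map. The concrete design below (concatenation followed by a 1x1 convolution) is our assumption; the abstract does not specify the fusion mechanism.

```python
import torch
import torch.nn as nn

class MultimodalFeatureBlock(nn.Module):
    """Hypothetical sketch: extract features from the color image and the
    depth guide in parallel, then fuse them."""
    def __init__(self, in_rgb: int, in_depth: int, out_ch: int):
        super().__init__()
        self.rgb_branch = nn.Sequential(
            nn.Conv2d(in_rgb, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU())
        self.depth_branch = nn.Sequential(
            nn.Conv2d(in_depth, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU())
        # Fusion by channel concatenation + 1x1 conv (our design choice).
        self.fuse = nn.Conv2d(2 * out_ch, out_ch, 1)

    def forward(self, rgb, depth):
        f = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        return self.fuse(f)

block = MultimodalFeatureBlock(3, 1, 32)
rgb = torch.rand(1, 3, 128, 128)     # color fundus image
depth = torch.rand(1, 1, 128, 128)   # estimated depth map used as a guide
fused = block(rgb, depth)            # -> shape (1, 32, 128, 128)
```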
