Abstract

Comparing deep learning results across glaucoma-diagnosis studies is essentially meaningless because private datasets are often used. Another challenge is overfitting of deep learning models on the relatively small public datasets, which leads to poor generalization. Here, we propose a practical approach for fine-tuning an existing state-of-the-art deep learning model, Inception-v3, for glaucoma detection. We propose a two-pronged approach that combines transfer learning with data augmentation and normalization. We used the publicly available RIM-ONE dataset, which contains 624 monocular and 159 stereoscopic retinal fundus images. Data augmentation operations that mimic the natural deformations in fundus images, together with Contrast Limited Adaptive Histogram Equalization (CLAHE) and normalization, were applied to the images. The weights of the Inception-v3 network were pretrained on the ImageNet dataset, which consists of real-world objects. We fine-tuned this network on the RIM-ONE dataset to obtain the deep features required for glaucoma detection without overfitting. Even though we used a small dataset, the results obtained with this network are comparable to those reported in the literature.
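The sketch below illustrates the general pipeline the abstract describes: CLAHE plus normalization as preprocessing, augmentations that mimic natural variation in fundus photographs, and an ImageNet-pretrained Inception-v3 backbone with a new binary classification head. It is a minimal sketch assuming a Keras/TensorFlow and OpenCV setup; the augmentation parameters, layer sizes, freezing scheme, and directory names are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch: CLAHE + normalization preprocessing and Inception-v3 fine-tuning.
# Augmentation parameters, head architecture, and the freezing scheme are
# illustrative assumptions, not the exact configuration reported in the paper.
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def clahe_normalize(img_rgb):
    """Apply CLAHE to the luminance channel, then scale pixels to [0, 1]."""
    lab = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    rgb = cv2.cvtColor(lab, cv2.COLOR_LAB2RGB)
    return rgb.astype(np.float32) / 255.0

# Augmentations mimicking natural variation in fundus images
# (small rotations, shifts, zooms, flips).
augmenter = ImageDataGenerator(
    preprocessing_function=lambda x: clahe_normalize(x.astype(np.uint8)),
    rotation_range=15,
    width_shift_range=0.05,
    height_shift_range=0.05,
    zoom_range=0.1,
    horizontal_flip=True,
)

# Inception-v3 backbone pretrained on ImageNet; replace the classifier head
# with a binary glaucoma/normal output and train only the new layers first
# to limit overfitting on a small dataset.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze ImageNet features

x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
x = tf.keras.layers.Dropout(0.5)(x)
output = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(base.input, output)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Example usage (directory layout is a placeholder):
# train_gen = augmenter.flow_from_directory("rim_one/train",
#                                           target_size=(299, 299),
#                                           batch_size=16,
#                                           class_mode="binary")
# model.fit(train_gen, epochs=20)
```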
