Abstract

Diabetic retinopathy (DR) is a disease that damages the blood vessels of the retina and can lead to permanent blindness in people with diabetes. Ophthalmologists identify this disease from fundus images of patients' eyes. However, manual detection of this abnormality is time-consuming, costly, and prone to error. To address this, a deep learning model is proposed to detect diabetic retinopathy from high-resolution fundus images, thereby reducing the chance of misdiagnosis. First, the fundus images are pre-processed using Contrast Limited Adaptive Histogram Equalization (CLAHE) and histogram-based segmentation. Next, features are extracted from the pre-processed images. Individual models, however, struggle to capture the complicated underlying features and can classify the various stages of diabetic retinopathy only with low accuracy. To resolve this, an ensemble of three deep Convolutional Neural Network (CNN) models (InceptionV3, Xception, and DenseNet121) is developed to encode rich features and improve classification across the DR stages: normal, mild, moderate, severe, and PDR (Proliferative Diabetic Retinopathy). The results show that the proposed ensemble model detects all the diabetic retinopathy stages, unlike the individual models, and performs better in comparison. This improvement can reduce the time, effort, and labor required in healthcare services and increase the accuracy of the final decision.
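
As a rough illustration of the pipeline described above (not the authors' exact implementation), the sketch below applies CLAHE to the luminance channel of a fundus image and averages the softmax outputs of InceptionV3, Xception, and DenseNet121 over the five DR classes. The input size, CLAHE parameters, normalization, and averaging strategy are assumptions for the sake of the example.

```python
# Hypothetical sketch: CLAHE preprocessing + averaged ensemble of three CNN backbones.
# Parameters (input size, clipLimit, [0, 1] scaling, equal-weight averaging) are assumptions.
import cv2
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, Xception, DenseNet121

NUM_CLASSES = 5  # normal, mild, moderate, severe, PDR

def preprocess_fundus(image_bgr, size=(299, 299)):
    """Enhance contrast with CLAHE on the L channel, then resize and scale to [0, 1]."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    return cv2.resize(enhanced, size).astype(np.float32) / 255.0

def build_branch(backbone_cls, inputs):
    """Wrap a pretrained backbone with a pooled five-way softmax head."""
    backbone = backbone_cls(include_top=False, weights="imagenet", input_tensor=inputs)
    x = layers.GlobalAveragePooling2D()(backbone.output)
    return layers.Dense(NUM_CLASSES, activation="softmax")(x)

inputs = layers.Input(shape=(299, 299, 3))
outputs = layers.Average()([
    build_branch(InceptionV3, inputs),
    build_branch(Xception, inputs),
    build_branch(DenseNet121, inputs),
])
ensemble = Model(inputs, outputs)
ensemble.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```

In practice each branch would typically be fine-tuned on the DR dataset before (or instead of) joint averaging, and per-backbone preprocessing would replace the simple [0, 1] scaling; the sketch only shows how the three models' predictions can be combined into a single ensemble output.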
