Abstract

Diabetic retinopathy (DR) is one of the leading causes of vision loss and blindness worldwide. DR typically develops in patients who have had diabetes for a long period. Automating DR diagnosis can save many people from blindness by identifying the disease at an early stage. In this work, we introduce a robust model for DR severity level prediction that leverages features extracted from pre-trained models to represent DR images. Activations from the filters of multiple convolution blocks of VGG-16 are extracted and aggregated using pooling and fusion methods. The aggregation module produces a compact, informative, and discriminative representation of the retinal images by removing noisy and redundant features. These feature representations are fed to the proposed DNN architecture to identify the severity level of DR. On the benchmark Kaggle APTOS 2019 contest dataset, our proposed method sets a new state-of-the-art result with an accuracy of 84.31% and an AUC of 97%. Experimental studies reveal that the proposed model outperforms existing models, especially on severe and proliferative-stage DR images.
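A minimal sketch of the pipeline described above, not the authors' released code: activations are taken from several VGG-16 convolution blocks, each is pooled to a per-channel descriptor, the descriptors are fused, and the fused vector is passed to a small DNN classifier over the five DR severity levels. The choice of blocks, the use of global average pooling with concatenation as the fusion step, and the classifier layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

# Pre-trained VGG-16 backbone used as a fixed feature extractor.
backbone = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
backbone.trainable = False

# Final activation of several convolution blocks (assumed choice of blocks).
block_names = ["block3_conv3", "block4_conv3", "block5_conv3"]
block_outputs = [backbone.get_layer(name).output for name in block_names]

# Pooling: collapse each activation map to a per-channel descriptor.
pooled = [layers.GlobalAveragePooling2D()(x) for x in block_outputs]

# Fusion: combine the block descriptors into one compact representation.
fused = layers.Concatenate()(pooled)

# DNN classifier head over the fused features (layer sizes are assumptions).
x = layers.Dense(512, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(5, activation="softmax")(x)  # 5 DR severity levels

model = Model(inputs=backbone.input, outputs=outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In this sketch only the head is trained while the VGG-16 weights stay frozen; a paper-faithful implementation would follow whatever pooling, fusion, and training schedule the full text specifies.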
