Abstract

Diabetic retinopathy (DR) is a prevalent ocular condition and a leading cause of vision impairment in individuals with diabetes. Regular monitoring through fundus photography combined with prompt intervention remains the most effective strategy for managing the disease. Given the large diabetic patient population and its extensive screening needs, there is growing interest in computer-aided, fully automated methods for diagnosing DR. In recent years, deep neural networks have achieved remarkable progress across a wide range of applications. Accurate and fine-grained DR classification is therefore essential for automating DR diagnosis and delivering tailored recommendations to patients. In this work, we present a cross-modality feature-fusion framework for diabetic retinopathy (DR) image classification, where the two modalities are the RGB fundus image and its green channel. First, a multi-scale multi-receptive feature extraction block learns local and global features from both modalities. The features learned at various scales are then fused effectively by the proposed multi-level feature fusion block for the image classification task. We evaluated the proposed framework on the MESSIDOR and IDRiD databases by comparing it to state-of-the-art (SOTA) deep learning frameworks for DR image classification. The results clearly demonstrate that the proposed cross-modality feature-fusion classification framework outperforms existing SOTA frameworks across various evaluation metrics.
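The cross-modality idea above (treating an RGB fundus image and its green channel as two input modalities, extracting features at several scales, and fusing them) can be illustrated with a minimal NumPy sketch. Note that the function names, the statistics-based descriptors, and concatenation-based fusion here are illustrative stand-ins: the paper's actual blocks are learned multi-scale multi-receptive and multi-level fusion modules, whose details are not given in the abstract.

```python
import numpy as np

def green_channel(rgb):
    """Extract the green channel as the second modality (illustrative)."""
    return rgb[..., 1:2]  # keep the channel axis: shape (H, W, 1)

def multiscale_features(img, scales=(1, 2, 4)):
    """Toy multi-scale extraction: average-pool at several strides and
    summarize each scale with simple statistics (mean, std)."""
    feats = []
    for s in scales:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        pooled = img[:h, :w].reshape(h // s, s, w // s, s, -1).mean(axis=(1, 3))
        feats.append(np.array([pooled.mean(), pooled.std()]))
    return np.concatenate(feats)

def fuse(rgb):
    """Concatenate per-modality multi-scale descriptors -- a stand-in for
    the paper's learned multi-level feature fusion block."""
    f_rgb = multiscale_features(rgb)
    f_green = multiscale_features(green_channel(rgb))
    return np.concatenate([f_rgb, f_green])

rgb = np.random.rand(64, 64, 3)
descriptor = fuse(rgb)
print(descriptor.shape)  # (12,): 2 stats x 3 scales x 2 modalities
```

The green channel is commonly singled out in fundus-image analysis because it tends to show the strongest vessel/lesion contrast, which is presumably why it serves as the second modality here.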
