Abstract

Diabetic retinopathy (DR) is a chronic eye condition whose incidence is rising rapidly with the growing prevalence of diabetes. A shortage of ophthalmologists, healthcare resources, and screening facilities leaves many patients without access to appropriate eye screening services. Deep learning (DL) therefore has the potential to serve as a powerful automated diagnostic tool in ophthalmology, particularly for the early detection of DR, where it compares favorably with traditional detection techniques. Despite their wide adoption, however, DL models are black boxes: they offer no explanation of how they learn representations or why they make a particular prediction. This opacity makes it difficult for intended end-users such as ophthalmologists to understand how the models function, hindering their acceptance for clinical use. Recently, several studies have been published on the interpretability of DL methods applied to DR-related tasks such as DR classification and segmentation. The goal of this paper is to provide a detailed overview of the interpretability strategies used in these tasks. The paper also presents the authors' insights and future directions in the field of DR to help the research community overcome open research problems.
