Abstract

Diabetic retinopathy (DR), a complication of diabetes mellitus, is a leading cause of blindness worldwide. Lesion segmentation is a crucial component of the initial diagnosis of DR, but the variation in lesion size and morphology makes manual identification extremely difficult and time-consuming. For the early diagnosis and treatment of DR, automatic segmentation of retinal lesions using deep learning (DL) has a significant impact: it can identify various abnormalities in retinal fundus images and accurately highlight the locations of the related lesions. Performing segmentation on fundus images with current techniques, such as U-Nets, remains highly difficult because the lesions are small relative to the high-resolution images. Down-sampling the input images simplifies the task, but detailed information is lost in the process. Patch-level analysis achieves fine-scale segmentation, although it frequently leads to misclassification because each patch lacks surrounding context. This study examines several components, including the datasets frequently used by researchers and the preprocessing techniques implemented to enhance model performance. This review article provides an in-depth investigation of contemporary DL techniques applied at various stages of DR detection based on segmentation of retinal lesions in fundus images. Different retinal image datasets applicable to DL are outlined, and specific models and their findings are described in terms of specificity, sensitivity, and accuracy on the commonly used datasets. Finally, relevant findings and future research objectives are presented.
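The patch-level analysis mentioned above can be sketched with a minimal patch-extraction routine. This is an illustrative example, not code from the reviewed work: the function name, patch size, and stride are assumptions. Using a stride smaller than the patch size produces overlapping patches, which retains some of the surrounding context that isolated patches would lose:

```python
import numpy as np

def extract_patches(image, patch_size=256, stride=128):
    """Extract overlapping square patches from a high-resolution image.

    A stride smaller than patch_size yields overlapping patches,
    preserving some context around each lesion candidate.
    """
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
            coords.append((y, x))
    return np.stack(patches), coords

# Synthetic stand-in for a fundus image (real images are often larger).
img = np.zeros((1024, 1024, 3), dtype=np.uint8)
patches, coords = extract_patches(img)
print(patches.shape)  # (49, 256, 256, 3): a 7x7 grid of overlapping patches
```

Each patch can then be segmented independently and the per-patch predictions stitched back using the recorded coordinates, averaging in the overlap regions.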

Full Text

Paper version not known

