Abstract

Diabetic Retinopathy (DR) is a serious complication of diabetes mellitus that causes long-term visual impairment and is a leading cause of blindness worldwide. Early detection of DR is therefore essential to prevent people from losing their sight. Classification is difficult because fundus images vary widely, especially in proliferative DR, which involves hemorrhages and the growth of new blood vessels in the retina. In this study, the publicly accessible fundus image dataset from Kaggle (EyePACS) is used together with real-time datasets for training and testing a transformer network. The proposed model provides an automated transformer-based mechanism for classifying DR at an early stage from the given inputs. The most relevant regions of the pre-processed images, including blood vessels, hemorrhages, soft exudates, and hard exudates, are segmented using a D2_UNet approach to support accurate classification. Finally, a new SE-ResCA-GTNet model is proposed to classify the varied severity levels of retinopathy from the input images, and the Gazelle Optimization (GO) algorithm is used to fine-tune the proposed classifier for improved accuracy. On the fundus image dataset, the proposed SE-ResCA-GTNet classification model achieves accuracy, recall, and F1-measure of 99.8%, 99.4%, and 99.3%, respectively. The experimental results demonstrate that the proposed method outperforms state-of-the-art approaches.
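The abstract describes a two-stage pipeline: lesion segmentation of the pre-processed fundus image, followed by severity classification of the image combined with the segmented regions. The sketch below illustrates that flow only; the `LesionSegmenter` and `SeverityClassifier` modules are hypothetical placeholders, since the actual D2_UNet, SE-ResCA-GTNet, and Gazelle Optimization designs are not specified in the abstract.

```python
# Hypothetical sketch of the described pipeline: segment key lesion regions,
# then classify DR severity. The modules below are simple stand-ins, NOT the
# paper's D2_UNet or SE-ResCA-GTNet architectures.
import torch
import torch.nn as nn


class LesionSegmenter(nn.Module):
    """Placeholder segmenter producing 4 masks: vessels, hemorrhages, soft/hard exudates."""
    def __init__(self, in_ch: int = 3, out_ch: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(x))  # per-pixel mask probabilities


class SeverityClassifier(nn.Module):
    """Placeholder classifier over the RGB image (3 ch) plus 4 lesion masks."""
    def __init__(self, in_ch: int = 7, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))  # severity logits


# Usage: one pre-processed 224x224 RGB fundus image (batch of 1).
image = torch.rand(1, 3, 224, 224)
masks = LesionSegmenter()(image)                                  # stage 1: segmentation
logits = SeverityClassifier()(torch.cat([image, masks], dim=1))   # stage 2: classification
severity = logits.argmax(dim=1)  # e.g. 0 = no DR ... 4 = proliferative DR
```

In the paper's setting, the classifier would additionally be fine-tuned with the Gazelle Optimization algorithm; that step is omitted here because its details are not given in the abstract.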
