Abstract

Myopia, a prevalent vision disorder that can lead to serious complications if left untreated, requires early and accurate detection for effective treatment. However, traditional diagnostic methods often lack trustworthiness and explainability, leading to bias and mistrust. This study presents a four-phase methodology for developing a robust myopia detection system. In the first phase, the dataset of training and testing images is acquired, preprocessed, and balanced. In the second phase, two models are deployed: a pre-trained VGG16 model, widely used for image classification tasks, and a sequential CNN built from convolutional layers. Performance metrics such as accuracy, recall, F1-score, sensitivity, and log loss are used to assess the models' effectiveness. The third phase integrates explainability, trustworthiness, and transparency through the application of Explainable Artificial Intelligence (XAI) techniques. Specifically, Local Interpretable Model-Agnostic Explanations (LIME) is employed to provide insight into the decision-making process of the deep learning model, offering explanations for the classification of images as myopic or normal. In the final phase, a user interface that brings the preceding phases together is implemented for the myopia detection and XAI models. The outcomes of this study contribute to the advancement of objective and explainable diagnostic methods for myopia detection. Notably, the VGG16 model achieves an accuracy of 96%, highlighting its efficacy in diagnosing myopia, and the LIME results provide valuable interpretations of myopia cases. The proposed methodology enhances transparency, interpretability, and trust in the myopia detection process.
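The two modeling steps described above can be made concrete with a short sketch. The following is a minimal, illustrative transfer-learning setup in Keras, assuming 224×224 RGB inputs, an ImageNet-pretrained VGG16 backbone with a small classification head, and binary cross-entropy (log loss) as the training objective; the layer sizes and hyperparameters are assumptions for illustration, not the exact configuration reported in the study.

```python
# Illustrative sketch only: a pre-trained VGG16 backbone fine-tuned for
# binary (myopic vs. normal) image classification. Image size, head layers,
# and hyperparameters are assumptions, not the study's exact settings.
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution expected by VGG16

# Load ImageNet weights and freeze the convolutional base.
base = VGG16(weights="imagenet", include_top=False,
             input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # myopic (1) vs. normal (0)
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",  # log loss
              metrics=["accuracy", tf.keras.metrics.Recall()])
```

A LIME explanation for a single prediction could then be generated along the following lines, assuming the `model` defined above and a preprocessed RGB image `img` of shape (224, 224, 3) scaled to [0, 1]; the superpixel and sample counts are placeholder values.

```python
# Illustrative LIME sketch: highlight which image regions drive one prediction.
# `model` and `img` are assumed from the sketch above; names are placeholders.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()

def predict_fn(batch):
    # LIME passes a batch of perturbed images; return probabilities
    # for both classes (normal, myopic).
    p = model.predict(batch)
    return np.hstack([1.0 - p, p])

explanation = explainer.explain_instance(
    img.astype("double"), predict_fn,
    top_labels=1, hide_color=0, num_samples=1000)

# Overlay the superpixels that most support the predicted label.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True,
    num_features=5, hide_rest=False)
overlay = mark_boundaries(temp, mask)
```

The resulting overlay marks the regions that most strongly support the predicted class, which is the kind of per-image explanation the study describes for distinguishing myopic from normal cases.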
