Irreversible vision loss is a common consequence of glaucoma, which demands accurate and timely diagnosis for effective management. This research aims to improve glaucoma classification accuracy by fusing information from two imaging modalities, optical coherence tomography (OCT) and fundus photography, through an innovative deep learning architecture with optimization. The approach combines a Deep Stochastic Variational Autoencoder Convolutional Neural Network (DSVAECNN) with the Adam optimizer to enable robust and accurate classification of glaucoma. A multi-path architecture is designed to process OCT radiomics features and fundus morphological features simultaneously. To ensure the effectiveness of the model, the study applies the Adam optimization algorithm to expedite convergence and reduce the risk of overfitting. The resulting model demonstrates improved generalization, which is critical for accurate diagnosis across diverse patient populations. The proposed approach is evaluated on a comprehensive dataset of OCT and fundus images drawn from a representative cohort of glaucoma patients and healthy individuals, and its performance is appraised with quantitative metrics including sensitivity, accuracy, and specificity. Comparisons with existing techniques show that the proposed approach detects glaucoma cases more accurately. By effectively harnessing the synergy between fundus images and OCT scans, this research advances glaucoma diagnosis and provides clinically useful findings, ultimately facilitating early detection and personalized management of glaucoma and helping to preserve vision for affected individuals.
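As a rough illustration of the kind of multi-path fusion classifier the abstract describes, the sketch below builds two stochastic (variational) branches, one for pre-extracted OCT radiomics features and one for fundus morphological features, fuses their latent codes, and trains the classification head with Adam. All layer sizes, feature dimensions, the KL weight, and class names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical two-branch fusion classifier; sizes and loss weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalBranch(nn.Module):
    """Encodes one modality's feature vector into a stochastic latent code."""
    def __init__(self, in_dim, latent_dim=32):
        super().__init__()
        self.fc = nn.Linear(in_dim, 64)
        self.mu = nn.Linear(64, latent_dim)      # latent mean
        self.logvar = nn.Linear(64, latent_dim)  # latent log-variance

    def forward(self, x):
        h = F.relu(self.fc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

class FusionClassifier(nn.Module):
    """Fuses OCT-radiomics and fundus-morphology latents into glaucoma/healthy logits."""
    def __init__(self, oct_dim, fundus_dim, latent_dim=32, n_classes=2):
        super().__init__()
        self.oct_branch = VariationalBranch(oct_dim, latent_dim)
        self.fundus_branch = VariationalBranch(fundus_dim, latent_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * latent_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, oct_feats, fundus_feats):
        z_o, mu_o, lv_o = self.oct_branch(oct_feats)
        z_f, mu_f, lv_f = self.fundus_branch(fundus_feats)
        logits = self.head(torch.cat([z_o, z_f], dim=1))  # feature-level fusion
        # Standard VAE-style KL regularizer toward a unit Gaussian prior
        kl = (-0.5 * torch.mean(1 + lv_o - mu_o.pow(2) - lv_o.exp())
              - 0.5 * torch.mean(1 + lv_f - mu_f.pow(2) - lv_f.exp()))
        return logits, kl

# One training step with Adam; synthetic tensors stand in for real extracted features.
model = FusionClassifier(oct_dim=100, fundus_dim=50)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
oct_x, fundus_x = torch.randn(8, 100), torch.randn(8, 50)
labels = torch.randint(0, 2, (8,))                # 0 = healthy, 1 = glaucoma (assumed coding)
logits, kl = model(oct_x, fundus_x)
loss = F.cross_entropy(logits, labels) + 0.01 * kl  # KL weight of 0.01 is an assumption
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, the fused latent could also be combined by concatenating intermediate convolutional feature maps rather than branch outputs; the abstract does not specify the fusion level, so concatenation of latent vectors is used here only for concreteness.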