Alzheimer’s Disease (AD) causes the gradual death of brain cells through brain-cell shrinkage and is more prevalent in older people. In most cases, the symptoms of AD are mistaken for age-related stress. Magnetic Resonance Imaging (MRI) is the most widely used method for detecting AD, and combining it with Artificial Intelligence (AI) techniques has made identifying brain-related diseases easier. However, similar phenotypes make it challenging to identify the disease from neuro-images. Hence, this work proposes a deep learning method to detect AD at an early stage. The newly implemented “Enhanced Residual Attention with Bi-directional Long Short-Term Memory (Bi-LSTM) (ERABi-LNet)” is used in the detection phase to identify AD from MRI images. This model improves Alzheimer’s detection performance by 2–5%, minimizes error rates, and increases model balance, thereby supporting multi-class problems. First, the MRI images are fed to a “Residual Attention Network (RAN)”, specially developed with three types of convolutional layers, namely atrous, dilated and Depth-Wise Separable (DWS), to obtain the relevant attributes. These layers determine the most appropriate attributes, which are then subjected to target-based fusion. The fused attributes are then fed into an “Attention-based Bi-LSTM”, which produces the final outcome. By tuning the parameters of the ERABi-LNet with the help of Modified Search and Rescue Operations (MCDMR-SRO), a median-based detection efficiency of 26.37% and an accuracy of 97.367% are obtained. The obtained results are compared with ROA-ERABi-LNet, EOO-ERABi-LNet, GTBO-ERABi-LNet and SRO-ERABi-LNet. The ERABi-LNet thus provides enhanced accuracy and other performance metrics compared to these deep learning models. The proposed method achieves better sensitivity, specificity, F1-Score and False Positive Rate than all of the above-mentioned competing models, with values of 97.49%, 97.84%, 97.74% and 2.616, respectively. This ensures that the model has better learning capabilities and produces fewer false positives with balanced predictions.
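To make the described pipeline concrete, the sketch below shows one way the feature-extraction stage (parallel atrous, dilated and depth-wise separable convolutions with fused outputs) could feed an attention-weighted Bi-LSTM classifier. This is a minimal illustrative assumption in PyTorch, not the authors' implementation: the channel sizes, dilation rates, concatenation-based fusion (standing in for the unspecified target-based fusion), additive attention and class count are all placeholders.

```python
# Illustrative ERABi-LNet-style sketch (assumed PyTorch design; all sizes are placeholders).
import torch
import torch.nn as nn

class ResidualAttentionFeatures(nn.Module):
    """Three parallel convolution branches (atrous, dilated, depth-wise separable)
    whose outputs are fused by concatenation -- a stand-in for the paper's
    target-based fusion, which the abstract does not specify."""
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.atrous = nn.Conv2d(in_ch, ch, 3, padding=2, dilation=2)   # atrous branch
        self.dilated = nn.Conv2d(in_ch, ch, 3, padding=4, dilation=4)  # dilated branch
        self.dws = nn.Sequential(                                      # depth-wise separable branch
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),       # depth-wise step
            nn.Conv2d(in_ch, ch, 1),                                   # point-wise step
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in (self.atrous, self.dilated, self.dws)]
        return torch.cat(feats, dim=1)  # fused attributes: (B, 3*ch, H, W)

class AttentionBiLSTMClassifier(nn.Module):
    """Treats pooled feature rows as a sequence, runs a Bi-LSTM, and applies a
    simple additive attention over time steps before classification."""
    def __init__(self, feat_ch=96, hidden=64, num_classes=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d((16, 16))            # fixed spatial size
        self.lstm = nn.LSTM(feat_ch * 16, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, fmap):
        x = self.pool(fmap)                                   # (B, C, 16, 16)
        b, c, h, w = x.shape
        seq = x.permute(0, 2, 1, 3).reshape(b, h, c * w)      # 16 feature rows as time steps
        out, _ = self.lstm(seq)                               # (B, 16, 2*hidden)
        weights = torch.softmax(self.attn(out), dim=1)        # attention over time steps
        context = (weights * out).sum(dim=1)                  # attention-weighted summary
        return self.fc(context)                               # class logits

# Usage on a batch of single-channel MRI slices (assumed 128x128 inputs, 4 classes).
backbone = ResidualAttentionFeatures()
head = AttentionBiLSTMClassifier(feat_ch=3 * 32)
logits = head(backbone(torch.randn(2, 1, 128, 128)))
print(logits.shape)  # torch.Size([2, 4])
```

The parallel branches are a common way to capture multi-scale context (dilation enlarges the receptive field without extra parameters, while depth-wise separable convolutions keep the branch lightweight); hyperparameter tuning of such a network, as with the MCDMR-SRO in the paper, would sit outside this sketch.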