Abstract

Background
Alzheimer’s disease (AD) results in cognitive dysfunction among older people, making everyday life extremely difficult. Early intervention is likely to be the most effective way to slow its progression. Machine learning models can help determine, at an early stage, whether a patient is demented. However, the most accurate models are non‐linear and less transparent, so the logic behind their predictions is not revealed. In this study, a neural network model was created, and two model‐agnostic methods were applied to it to understand the important features the model used to make predictions.

Method
A total of 25 features were selected from the OASIS‐2 dataset of 150 participants. Feature selection was performed using the Kruskal algorithm, and 10 potential features were retained. A feed‐forward neural network with sigmoid hidden neurons and softmax output neurons was created to classify demented and non‐demented cases. Two model‐agnostic methods, LIME (Local Interpretable Model‐Agnostic Explanations) and SHAP (SHapley Additive exPlanations), were then applied to the model to identify the features contributing to its predictions. LIME was used to generate a human‐readable explanation for a single prediction by approximating the behavior of the model locally around the instance being predicted. SHAP is a method based on the concept of Shapley values from cooperative game theory; it assigns a contribution value to each feature, indicating how much that feature contributes to the final prediction.

Result
The neural network achieved an accuracy of 91.4%. The Clinical Dementia Rating (CDR) was found to be the most significant feature, followed by normalized whole‐brain volume (nWBV). CDR was also observed to positively influence the detection of dementia cases, whereas nWBV had a negative influence, which is consistent with clinical findings. Additionally, the contribution of each feature to the prediction is shown, which may assist in evaluating the model’s trustworthiness.

Conclusion
The goal of this research was to create a neural network model that is both highly accurate and interpretable, so that non‐technical stakeholders can understand the reasoning behind its predictions. Accuracy combined with interpretability increases the reliability of AD prediction systems.
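For readers who want to see how such a pipeline fits together, the minimal sketch below illustrates the workflow described above on OASIS‐2‐style tabular data: ranking features with a Kruskal–Wallis test, training a small feed‐forward network with sigmoid hidden units, and explaining predictions with SHAP and LIME. The file name, column names, and the specific choices of the Kruskal–Wallis test and scikit‐learn's MLPClassifier are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch of the described pipeline; column names, file path,
# and library choices are assumptions, not the authors' implementation.
import pandas as pd
from scipy.stats import kruskal
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

df = pd.read_csv("oasis2.csv")                       # assumed file name
y = (df["Group"] == "Demented").astype(int)          # assumed label column
X = df.drop(columns=["Group"]).select_dtypes("number")

# 1) Rank features with the Kruskal-Wallis H-test and keep the top 10.
scores = {
    col: kruskal(X.loc[y == 0, col], X.loc[y == 1, col]).statistic
    for col in X.columns
}
top10 = sorted(scores, key=scores.get, reverse=True)[:10]
X = X[top10]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# 2) Small feed-forward network with sigmoid (logistic) hidden units;
#    scikit-learn applies the appropriate output activation for classification.
clf = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                    max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))

# 3) SHAP: model-agnostic Shapley-value estimates over the trained classifier.
explainer = shap.KernelExplainer(clf.predict_proba, shap.sample(X_tr, 50))
shap_values = explainer.shap_values(X_te.iloc[:20])

# 4) LIME: local, human-readable explanation for a single test instance.
lime_exp = LimeTabularExplainer(X_tr.values, feature_names=top10,
                                class_names=["non-demented", "demented"],
                                mode="classification")
explanation = lime_exp.explain_instance(X_te.iloc[0].values,
                                        clf.predict_proba, num_features=10)
print(explanation.as_list())
```

In a setup like this, the SHAP values indicate each feature's signed contribution to the predicted class probability (e.g., a positive value for CDR pushing toward "demented"), while the LIME output lists the locally most influential features for one individual prediction.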
