Abstract

Alzheimer’s disease is one of the leading causes of death in the world. Alzheimer’s is typically diagnosed through expensive imaging methods, such as positron emission tomography (PET) scans and magnetic resonance imaging (MRI), as well as invasive methods, such as cerebrospinal fluid analysis. In this study, we develop an interpretable hierarchical deep learning model to detect the presence of Alzheimer’s disease from transcripts of interviews of individuals who were asked to describe a picture. Our deep recurrent neural network employs a novel three-level hierarchical attention over self-attention (AoS3) mechanism to model the temporal dependencies of longitudinal data. We demonstrate the interpretability of the model with the importance scores of words, sentences, and transcripts extracted from our AoS3 model. Numerical results demonstrate that our deep learning model can detect Alzheimer’s disease from the transcripts of patient interviews with 96% accuracy when tested on the DementiaBank data set. Our interpretable neural network model can help diagnose Alzheimer’s disease in a noninvasive and affordable manner, improve patient outcomes, and result in cost containment.

History: Rema Padman served as the senior editor for this article.

Data Ethics & Reproducibility Note: The code capsule is available on Code Ocean at https://codeocean.com/capsule/2881658/tree/v1 and in the e-Companion to this article (available at https://doi.org/10.1287/ijds.2020.0005). The study involves secondary use of already-collected data. None of the authors were part of the original study team. The authors had no interaction with living individuals and had no access to protected health information (PHI) or private identifiable information about living individuals.
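The abstract only names the AoS3 architecture; the authors' actual implementation is in the Code Ocean capsule linked in the reproducibility note. As a rough illustration of what a three-level attention-over-self-attention classifier of this kind could look like, the PyTorch sketch below applies self-attention and attention pooling at the word, sentence, and transcript levels in turn. All module names, dimensions, and head counts here are assumptions for illustration and are not taken from the authors' code.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Additive attention pooling: returns a weighted sum of the inputs
    along with the attention weights, which serve as importance scores."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x):                        # x: (batch, seq, dim)
        w = torch.softmax(self.score(x), dim=1)  # (batch, seq, 1)
        return (w * x).sum(dim=1), w.squeeze(-1)

class HierarchicalAoS(nn.Module):
    """Hypothetical sketch of attention over self-attention at three
    levels: words -> sentences -> transcripts -> patient prediction."""
    def __init__(self, vocab_size, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.word_sa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.word_pool = AttentionPool(dim)
        self.sent_sa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.sent_pool = AttentionPool(dim)
        self.trans_sa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.trans_pool = AttentionPool(dim)
        self.classify = nn.Linear(dim, 2)        # Alzheimer's vs. control

    def forward(self, tokens):
        # tokens: (transcripts, sentences, words) integer ids for one patient
        T, S, W = tokens.shape
        x = self.embed(tokens.view(T * S, W))            # word embeddings
        x, _ = self.word_sa(x, x, x)                     # word self-attention
        sents, word_w = self.word_pool(x)                # (T*S, dim)
        sents = sents.view(T, S, -1)
        sents, _ = self.sent_sa(sents, sents, sents)     # sentence self-attention
        trans, sent_w = self.sent_pool(sents)            # (T, dim)
        trans = trans.unsqueeze(0)                       # (1, T, dim)
        trans, _ = self.trans_sa(trans, trans, trans)    # transcript self-attention
        patient, trans_w = self.trans_pool(trans)        # (1, dim)
        return self.classify(patient), (word_w, sent_w, trans_w)

# Example: one patient with 4 transcripts of 10 sentences, 20 tokens each
# model = HierarchicalAoS(vocab_size=5000)
# logits, (word_w, sent_w, trans_w) = model(torch.randint(0, 5000, (4, 10, 20)))
```

The attention weights returned at each level (word_w, sent_w, trans_w) correspond to the word-, sentence-, and transcript-level importance scores the abstract describes, which is what makes this style of model interpretable.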
