Alzheimer's disease (AD) is a neurodegenerative disorder with irreversible progression. It is currently diagnosed using invasive and costly methods, such as cerebrospinal fluid analysis, neuroimaging, and neuropsychological assessments. Recent studies indicate that certain changes in language ability can predict early cognitive decline, highlighting the potential of speech analysis for AD recognition. Based on this premise, this study proposes a multi-channel network framework for AD recognition, referred to as ADNet. It integrates both time-domain and frequency-domain features of speech signals, using waveform images and log-Mel spectrograms derived from raw speech as data sources. The framework employs inverted residual blocks to enhance the learning of low-level time-domain features and gated multi-information units to effectively combine local and global frequency-domain features. The framework is evaluated on a dataset from the Shanghai Cognitive Screening (SCS) digital neuropsychological assessment. The results show that the proposed method outperforms existing speech-based methods, achieving an accuracy of 88.57%, a precision of 88.67%, and a recall of 88.64%. This study demonstrates that the proposed framework can effectively distinguish between AD patients and normal controls, and it may be useful for developing early recognition tools for AD.
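To make the two-channel idea concrete, the sketch below shows one plausible PyTorch realization: a time-domain branch built from inverted residual blocks over waveform images, and a frequency-domain branch that mixes local convolutional and globally pooled log-Mel features through a learned gate before fusion. All layer sizes, the gating formulation, and the class names (`InvertedResidual`, `GatedFusion`, `TwoChannelADNet`) are illustrative assumptions, not the paper's actual ADNet specification.

```python
# Minimal sketch of a two-channel AD recognition network, assuming
# MobileNetV2-style inverted residuals and a sigmoid-gated local/global mix.
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """Inverted residual block: expand -> depthwise conv -> project, with skip."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection around the block


class GatedFusion(nn.Module):
    """Gated unit mixing local (conv) and global (pooled) spectrogram features."""

    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)
        self.global_fc = nn.Linear(channels, channels)
        self.gate = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        local = self.local(x)                             # local context
        g = x.mean(dim=(2, 3))                            # global average pool
        glob = self.global_fc(g)[..., None, None].expand_as(x)
        gate = torch.sigmoid(self.gate(torch.cat([local, glob], dim=1)))
        return gate * local + (1 - gate) * glob           # learned convex mix


class TwoChannelADNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Time-domain channel: waveform image -> inverted residual blocks.
        self.time = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1),
            InvertedResidual(32),
            InvertedResidual(32),
            nn.AdaptiveAvgPool2d(1),
        )
        # Frequency-domain channel: log-Mel spectrogram -> gated fusion.
        self.freq = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1),
            GatedFusion(32),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)      # AD vs. normal control

    def forward(self, wave_img, mel_img):
        t = self.time(wave_img).flatten(1)
        f = self.freq(mel_img).flatten(1)
        return self.classifier(torch.cat([t, f], dim=1))


if __name__ == "__main__":
    model = TwoChannelADNet()
    wave = torch.randn(4, 1, 128, 128)  # dummy waveform images
    mel = torch.randn(4, 1, 128, 128)   # dummy log-Mel spectrograms
    print(model(wave, mel).shape)       # torch.Size([4, 2])
```

The design choice worth noting is late fusion: each channel is reduced to a fixed-length embedding before concatenation, so the waveform and spectrogram branches can be sized independently of one another.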