Abstract

As the most common neurodegenerative disease among older adults, Alzheimer's disease (AD) leads to memory loss, impaired language and judgment, gait disorders, and other cognitive deficits severe enough to interfere with daily activities and significantly diminish quality of life. Recent research has shown promising results in automatic AD diagnosis via speech, leveraging advances in deep learning in the audio domain. However, most existing studies rely on a centralized learning framework that requires subjects' voice data to be gathered on a central server, raising severe privacy concerns. To resolve this, we propose the first federated-learning-based approach for automatic AD diagnosis via spontaneous speech analysis while preserving subjects' data privacy. Extensive experiments under various federated learning settings on the ADReSS challenge dataset show that the proposed model achieves high accuracy for AD detection while preserving privacy. To ensure fairness of model performance across clients in federated settings, we further deploy fair aggregation mechanisms, in particular q-FedAvg and q-FedSGD, which greatly reduce the algorithmic bias caused by data heterogeneity among clients.

Clinical Relevance: The experiments were conducted on publicly available clinical datasets. No humans or animals were involved.
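The fair aggregation mechanisms mentioned above (q-FedAvg / q-FedSGD) reweight client contributions so that clients with higher loss carry more weight in the server update. The following is a minimal illustrative sketch of that q-weighted reweighting idea only, not the paper's actual implementation; the function name, the scalar "model", and the toy values are all hypothetical.

```python
import numpy as np

def q_weighted_aggregate(global_w, client_deltas, client_losses, q=1.0):
    """Aggregate client updates, upweighting clients with higher loss.

    q = 0 reduces to uniform (FedAvg-style) averaging; larger q pushes
    the global update toward the worst-performing clients, which is the
    fairness intuition behind q-FFL-style methods.
    """
    losses = np.asarray(client_losses, dtype=float)
    weights = losses ** q            # F_k^q reweighting
    weights = weights / weights.sum()
    update = sum(w * d for w, d in zip(weights, client_deltas))
    return global_w + update

# Toy usage: three clients, a scalar "model" for illustration.
new_w = q_weighted_aggregate(
    global_w=0.0,
    client_deltas=[0.1, 0.2, 0.3],
    client_losses=[1.0, 1.0, 2.0],
    q=0.0,  # uniform averaging
)
```

With q = 0 the three updates are averaged uniformly; raising q (e.g. q = 1) shifts weight toward the third client, whose loss is highest, illustrating how the degree of fairness is tunable.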
