Abstract

Background
Accurately interpreting a speaker's nonverbal emotional vocal signals (their prosody) is often essential for successful communication. In the noisy conditions of daily life, this presents the brain with a difficult computational challenge that may be vulnerable to neurodegenerative pathologies. However, processing of acoustically degraded prosody has not been studied in these diseases.

Method
Here we addressed this issue in patients representing all major syndromes of primary progressive aphasia (N = 33) and Alzheimer's disease (N = 18) versus healthy age-matched controls (N = 24). As a model paradigm for the degraded 'noisy' speech signals of everyday life, we used noise-vocoding: digital division of the speech signal into a variable number of frequency channels constituted from amplitude-modulated white noise, such that fewer channels convey less spectrotemporal detail and reduce intelligibility. We assessed the impact of noise-vocoding on recognition of three canonical prosodic emotions (anger, surprise, sadness) at three noise-vocoding levels.

Result
Compared with healthy controls, all patient groups were impaired in recognising prosodic emotions (particularly anger) in clear speech, while patients with Alzheimer's disease showed a disproportionate cost for recognition of the noise-vocoded emotion of sadness.

Conclusion
Our findings open a window on a dimension of emotional communication that has been relatively overlooked in dementia, and suggest a novel candidate paradigm for investigating and quantifying this systematically.
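The vocoding procedure described in the Method can be sketched in a few lines of signal-processing code. The following Python sketch (NumPy/SciPy) is illustrative only: the channel spacing, filter orders, 30 Hz envelope cutoff, and 100-7000 Hz frequency range are assumptions for the sake of a runnable example, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels, f_lo=100.0, f_hi=7000.0):
    """Replace the spectral fine structure of `speech` with band-limited,
    envelope-modulated white noise across `n_channels` frequency channels."""
    # Logarithmically spaced channel edges spanning the speech range (assumed).
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    # Low-pass filter used to smooth each channel's amplitude envelope (assumed cutoff).
    env_lp = butter(2, 30.0, btype="lowpass", fs=fs, output="sos")
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)               # band-limit the speech
        env = sosfiltfilt(env_lp, np.abs(hilbert(band)))   # smoothed amplitude envelope
        env = np.maximum(env, 0.0)                         # clip filter undershoot
        out += env * sosfiltfilt(band_sos, noise)          # modulate band-limited noise
    # Match the overall RMS level of the original signal.
    return out * np.sqrt(np.mean(speech**2) / np.mean(out**2))
```

For example, `noise_vocode(x, 16000, n_channels=2)` would yield far coarser spectrotemporal detail, and hence lower intelligibility, than `n_channels=16`; the specific channel numbers defining the study's three vocoding levels are not given in the abstract.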
