Abstract

Background
Accurately interpreting a speaker's nonverbal emotional vocal signals (their prosody) is often essential for successful communication. In the noisy conditions of daily life, this presents the brain with a difficult computational challenge that may be vulnerable to neurodegenerative pathologies. However, processing of acoustically degraded prosody has not been studied in these diseases.

Method
Here we addressed this issue in patients representing all major syndromes of primary progressive aphasia (N = 33) and Alzheimer's disease (N = 18) versus healthy age‐matched controls (N = 24). As a model paradigm for the degraded 'noisy' speech signals of everyday life, we used noise‐vocoding: digital division of the speech signal into a variable number of frequency channels constituted from amplitude‐modulated white noise, such that fewer channels convey less spectrotemporal detail and reduce intelligibility. We assessed the impact of noise‐vocoding on recognition of three canonical prosodic emotions (anger, surprise, sadness) at three noise‐vocoding levels. Correlations were also assessed between perception of degraded emotional prosody and measures of social cognition.

Result
Compared with healthy controls, all patient groups were impaired in the identification of 'clear' emotional prosody. This impairment in all groups persisted when the emotional prosody stimuli were noise‐vocoded, with differences seen between 18 and six channels, and between 12 and six channels. Cost analyses showed a significantly larger cost of noise‐vocoding in logopenic variant primary progressive aphasia (lvPPA) than in healthy controls. Accurate identification of degraded emotional prosody in patients was significantly correlated with measures of social cognition.

Conclusion
Our findings open a window on a dimension of real‐world emotional communication that has been relatively overlooked in dementia, and suggest a novel candidate paradigm for investigating and quantifying this systematically.
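For readers unfamiliar with the manipulation, the sketch below illustrates a standard channel-vocoding procedure of the kind the abstract describes: band-pass filter the speech into a chosen number of channels, extract each channel's amplitude envelope, use that envelope to modulate band-limited white noise, and sum the modulated bands. This is a minimal illustration under stated assumptions, not the authors' stimulus-generation code; the band edges, filter orders, envelope cut-off, and the function name `noise_vocode` are all assumptions for the example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=5000.0, env_cutoff=30.0):
    """Noise-vocode a speech signal into n_channels frequency bands.

    Each band's amplitude envelope (Hilbert magnitude, low-pass smoothed)
    modulates band-limited white noise; the modulated bands are summed.
    Fewer channels preserve less spectrotemporal detail.
    """
    # Logarithmically spaced band edges between f_lo and f_hi (one common choice)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    noise = np.random.randn(len(signal))  # broadband white-noise carrier
    env_sos = butter(4, env_cutoff, btype='low', fs=fs, output='sos')
    out = np.zeros(len(signal))

    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        speech_band = sosfiltfilt(band_sos, signal)     # band-limit the speech
        envelope = np.abs(hilbert(speech_band))         # amplitude envelope
        envelope = sosfiltfilt(env_sos, envelope)       # smooth the envelope
        noise_band = sosfiltfilt(band_sos, noise)       # band-limit the noise carrier
        out += np.clip(envelope, 0, None) * noise_band  # modulate and accumulate

    # Match the RMS level of the original signal
    out *= np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))
    return out
```

Reducing `n_channels` (e.g. from 18 to 12 to 6, as in the levels reported above) progressively coarsens the spectral detail available for recognising emotional prosody while leaving the broad temporal envelope intact.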