Abstract

Theoretical Background: The recognition of a familiar voice is an accomplishment of the human brain that allows us to reliably identify familiar people without seeing them in person. Notably, human voice recognition is a distinct speech-related process and appears not to depend on linguistic or semantic language comprehension. Whereas language comprehension and speech functions are supported by the left hemisphere (Cabeza & Nyberg, 2000), Belin and Zatorre (2003) found a right-hemisphere dominance for voice recognition. More specifically, the right anterior superior temporal sulcus (STS) and its connections to the prefrontal and medial temporal cortex are involved in recognizing a familiar voice (von Kriegstein & Giraud, 2004). These findings lead to the assumption that patients with left-hemisphere lesions and an aphasic syndrome should have no difficulty recognizing a familiar voice. Preliminary support comes from a behavioral study by Kneidl (2006), who demonstrated that the voice recognition performance of aphasic patients with left-hemisphere damage is comparable to that of healthy subjects. To our knowledge, however, there is as yet no experimental imaging study of voice recognition in aphasic patients with left-hemisphere lesions. The aim of the present study was therefore twofold. First, we wanted to replicate the finding that aphasic patients perform comparably to healthy controls in voice recognition. Second, we wanted to test the assumption that aphasic patients show activation in the right STS similar to that of healthy controls when listening to familiar voices.

Method: Subjects were four right-handed patients with left-hemisphere lesions and nine right-handed healthy controls matched for age and education. Patients were tested on neuropsychological functions as well as anxiety and depressive symptoms. We used an event-related fMRI paradigm to measure cerebral activity during the identification of familiar and unfamiliar voices; changes in activity were quantified via the BOLD response. Stimuli consisted of 30 mono- and disyllabic words and non-words spoken by one relative of each subject (husband or wife) and by four unknown speakers. All stimuli were presented binaurally through earphones in a pseudo-random order. Participants responded to a known or unknown voice with a left or right key press using the index or middle finger of the right hand and received no feedback on the correctness of their response. All stimuli were presented once within a block in random order, and subjects completed four blocks separated by short rests (a sketch of the trial structure is given below).

Preliminary Results: The error rate was almost three times higher for aphasic patients than for controls (45.9% vs. 15.4%; U=-2.00, p=.011, two-tailed). There was no difference in omission errors between the two groups (U=15.5, p=.71). Within the patient group, the number of correct responses was comparable for words and non-words (Z=.92, p=.36) and for familiar and unfamiliar voices (Z=1.10, p=.27; see the statistical sketch below). Words spoken by a familiar voice elicited increased activation of the right middle STS (BA 22) compared with words spoken by an unfamiliar voice.

Conclusion: Recognition of a familiar voice appears to involve a network that includes the right middle STS. Contrary to our expectations, aphasic patients showed a reduced ability to recognize a familiar voice.
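The abstract does not specify how the pseudo-random presentation order was constrained. The following minimal sketch illustrates one way such a trial structure could be implemented; the 15/15 split of familiar and unfamiliar items per block and the limit of three consecutive trials from the same voice category are assumptions, not details from the study.

```python
import random

# Sketch of the trial structure described in the Method section.
# Assumptions (not stated in the abstract): 15 familiar / 15 unfamiliar items
# per block, and a pseudo-randomisation constraint of at most three
# consecutive trials from the same voice category.

N_BLOCKS = 4   # four blocks, separated by short rests
MAX_RUN = 3    # assumed constraint on consecutive same-category trials

STIMULI = (
    [("familiar", f"item_{i}") for i in range(15)] +
    [("unfamiliar", f"item_{i}") for i in range(15)]
)


def pseudo_random_order(stimuli, max_run=MAX_RUN):
    """Shuffle until no voice category repeats more than `max_run` times in a row."""
    while True:
        order = random.sample(stimuli, k=len(stimuli))   # each stimulus once per block
        runs_ok = all(
            len({cat for cat, _ in order[i:i + max_run + 1]}) > 1
            for i in range(len(order) - max_run)
        )
        if runs_ok:
            return order


# Build the four blocks; responses would be collected as left/right key presses
# (index vs. middle finger of the right hand) with no feedback given.
blocks = [pseudo_random_order(STIMULI) for _ in range(N_BLOCKS)]
```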
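The group and within-group comparisons reported above are consistent with a Mann-Whitney U test (between groups) and Wilcoxon signed-rank tests (within patients). A minimal sketch of these analyses using SciPy is shown below; the arrays are hypothetical placeholders, not the study data.

```python
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical per-subject error rates (proportion of incorrect responses)
patients_error = [0.52, 0.40, 0.48, 0.44]                                # n = 4 aphasic patients
controls_error = [0.12, 0.18, 0.10, 0.20, 0.15, 0.14, 0.17, 0.16, 0.13]  # n = 9 controls

# Between-group comparison of error rates (Mann-Whitney U, two-tailed)
u_stat, p_between = mannwhitneyu(patients_error, controls_error,
                                 alternative="two-sided")

# Within-patient comparisons of correct responses (Wilcoxon signed-rank tests):
# words vs. non-words, and familiar vs. unfamiliar voices
correct_words      = [30, 28, 25, 27]
correct_nonwords   = [29, 26, 27, 26]
w_lex, p_lexicality = wilcoxon(correct_words, correct_nonwords)

correct_familiar   = [31, 27, 26, 28]
correct_unfamiliar = [28, 26, 24, 25]
w_fam, p_familiarity = wilcoxon(correct_familiar, correct_unfamiliar)

print(f"patients vs. controls:   U = {u_stat:.2f}, p = {p_between:.3f}")
print(f"words vs. non-words:     p = {p_lexicality:.3f}")
print(f"familiar vs. unfamiliar: p = {p_familiarity:.3f}")
```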
