Abstract

To perceive external information quickly and accurately, the human brain appears to integrate information from multiple sensory modalities. We used positron emission tomography (PET) to identify the brain areas involved in auditory–visual speech perception. We measured regional cerebral blood flow (rCBF) in young, normal volunteers during the presentation of a speaker's dynamic facial movements during vocalization and during a visual control condition (visual noise), each under two auditory conditions: normal and degraded speech sounds. The subjects were instructed to listen carefully to the presented speech while keeping their eyes open and to report what they heard. The PET data showed that the elevation of rCBF in the right fusiform gyrus (known as the "face area") was not significant when the subjects listened to normal speech accompanied by a dynamic image of the speaker's face, but was significant when degraded speech (filtered with a 500 Hz low-pass filter) was presented with the facial image. These results support the possible involvement of the fusiform face area (FFA) in auditory–visual speech perception, particularly when auditory information is degraded, and suggest that visual information is recruited interactively to compensate for insufficient auditory information.
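
The degraded-speech condition was produced with a 500 Hz low-pass filter. The abstract does not specify the filter type, order, or implementation, so the following is only a minimal sketch of how such degradation could be applied to a speech waveform, assuming a 4th-order Butterworth design from SciPy; the function name degrade_speech and the synthetic test signal are illustrative, not taken from the study.

import numpy as np
from scipy.signal import butter, filtfilt

def degrade_speech(waveform, sample_rate, cutoff_hz=500.0, order=4):
    # Low-pass filter a speech waveform, removing spectral detail above cutoff_hz.
    # Assumption: a 4th-order Butterworth filter; the study only states "500 Hz low-pass".
    nyquist = sample_rate / 2.0
    b, a = butter(order, cutoff_hz / nyquist, btype="low")
    # filtfilt runs the filter forward and backward, giving zero phase distortion.
    return filtfilt(b, a, waveform)

if __name__ == "__main__":
    # Illustrative use on one second of a synthetic 16 kHz signal with
    # a low (200 Hz) and a high (2000 Hz) component; only the low component survives.
    fs = 16000
    t = np.arange(fs) / fs
    speech_like = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
    degraded = degrade_speech(speech_like, fs)
    print(degraded.shape)  # (16000,)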
