Abstract

We investigated the perceptual processing of facial and vocal information to test the nature of the multisensory integration process. Single-syllable words were presented in background noise at one of eight stimulus durations, ranging from 45% to 80% of the total word duration. Performance was evaluated relative to the fuzzy logical model of perception (FLMP) and an additive model (ADD). The two models differ in how the sources of information are combined: optimal multiplicative integration in the FLMP versus additive integration in the ADD. The FLMP provided a significantly better description of the word identifications than the ADD. Consistent with outcomes from several other audiovisual speech perception tasks, the FLMP also accurately describes the continuous uptake of information during word perception.
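
As a minimal sketch of the contrast between the two integration rules, the Python snippet below implements the standard two-alternative formulations from the FLMP literature: multiplicative combination with relative-goodness normalization for the FLMP, and simple averaging for the additive model. The support values and function names are illustrative assumptions, not code or parameters from this study.

```python
# Sketch of the two integration rules compared in the study.
# Auditory support a and visual support v for a response alternative
# are assumed to lie in [0, 1]; the values below are hypothetical.

def flmp_probability(a: float, v: float) -> float:
    """FLMP: multiplicative integration followed by relative-goodness
    normalization (two-alternative case)."""
    match = a * v
    mismatch = (1.0 - a) * (1.0 - v)
    return match / (match + mismatch)


def additive_probability(a: float, v: float) -> float:
    """ADD: the two sources of support are simply averaged."""
    return (a + v) / 2.0


if __name__ == "__main__":
    # Hypothetical support values, e.g. an ambiguous auditory signal
    # paired with a highly informative view of the face.
    a, v = 0.6, 0.9
    print(f"FLMP prediction: {flmp_probability(a, v):.3f}")     # ~0.931
    print(f"ADD prediction:  {additive_probability(a, v):.3f}")  # 0.750
```

Note how the multiplicative rule lets a strong source dominate when the other is ambiguous, whereas the additive rule yields a compromise; this difference in predicted identification probabilities is what allows the two models to be discriminated by the data.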
