Abstract

We assessed the role of audiovisual integration in selective attention by testing selective attention to sound. Participants were asked to focus on one of two audio speech streams presented simultaneously at different pitches. We measured recall of words from the cued or the uncued sentence using a two-alternative forced choice (2AFC) task at the end of each trial. A video clip of a speaker's mouth was presented in the middle of the display, matching one of the two simultaneous auditory streams (it matched the cued sentence on 50% of trials and the uncued sentence on the remainder). In Experiment 1, the cue was 75% valid. Recall was better on valid trials than on invalid ones. Critically, however, differences between audiovisually matching and audiovisually mismatching sentences were found only in the valid condition; in the invalid condition, no such differences emerged. In Experiment 2, the cue to the relevant sentence was 100% valid, and we included a control condition in which the lips matched neither of the sentences. Performance was better when the lips matched the cued sentence than when they matched the uncued sentence or neither, suggesting a benefit of audiovisual matching rather than a cost of mismatch. Our results indicate that attention to acoustic frequency (pitch) plays an important role in determining which sounds benefit from multisensory integration.
