Abstract
The visual system processes images in terms of spatial frequency-tuned channels. However, it is not clear how this early visual processing influences complex object and motion processing. Two studies explored this question in audiovisual speech perception. Subjects were presented with spatial frequency filtered images of a moving face during a speech-in-noise task. A wavelet procedure was used to create five bandpass-filtered stimulus sets. The CID Everyday Sentences were presented with a multi-voice babble noise signal, and key word identification accuracy was scored. Performance varied across the filter bands, with peak accuracy observed for images containing spatial frequencies spanning 7–14 cycles/face; accuracy was lower for both higher and lower spatial frequency bands. Manipulating viewing distance produced no change in the overall shape or peak of the key word accuracy function. However, at the longest viewing distance, performance in the highest spatial frequency band decreased markedly. The results are discussed in terms of visual information processing constraints on audiovisual integration.