Abstract
Detection thresholds for spoken sentences in steady-state noise are reduced by 1–3 dB when synchronized video images of movements of the lips and other surface features of the face are provided. In a previous report [K. W. Grant and P. F. Seitz, J. Acoust. Soc. Am. 103, 3018 (1998)], we showed that the amount of masked threshold reduction, or bimodal coherence masking protection (BCMP), depended on the degree of correlation between the rms amplitude envelope of the target sentence and the area of lip opening. In the present study, we extend these results by directly manipulating this cross-modality correlation through either bandpass filtering or amplitude adjustments of selected words contained in the target sentences. A control condition was also included in which visual orthography was provided to explicitly identify the target sentence prior to each test trial. Results showed that orthographic information reduced detection thresholds by about 0.5 dB for all target sentences. Preliminary results for filtered and amplitude-adjusted sentences suggest that the magnitude of the BCMP depends primarily on the cross-modality correlation between lip-area function and rms amplitude envelope computed over windows of approximately 300 ms. [Work supported by NIH and the Department of Clinical Investigation, Walter Reed Army Medical Center.]
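The core measurement described above is a cross-modality correlation between the speech rms amplitude envelope and the lip-area function, computed over windows of roughly 300 ms. The following is a minimal illustrative sketch (not the authors' code) of that computation, assuming both signals have already been resampled to a common frame rate; the sampling rate, frame length, and synthetic test signals are assumptions for demonstration only.

```python
import numpy as np

def rms_envelope(signal: np.ndarray, fs: int, frame_ms: float = 10.0) -> np.ndarray:
    """Frame-by-frame rms amplitude envelope (one value per frame)."""
    frame_len = int(fs * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def windowed_correlation(env: np.ndarray, lip_area: np.ndarray,
                         frame_ms: float = 10.0,
                         window_ms: float = 300.0) -> np.ndarray:
    """Pearson correlation between the two series in sliding ~300 ms windows.

    Assumes env and lip_area share the same frame rate and time alignment.
    """
    n = min(len(env), len(lip_area))
    win = int(window_ms / frame_ms)  # e.g., 30 frames per 300 ms window
    corrs = []
    for start in range(n - win + 1):
        a = env[start:start + win]
        b = lip_area[start:start + win]
        if a.std() == 0 or b.std() == 0:  # flat window: correlation undefined
            corrs.append(0.0)
        else:
            corrs.append(np.corrcoef(a, b)[0, 1])
    return np.array(corrs)

# Illustrative usage with synthetic signals (all values are assumptions):
fs = 16000
t = np.arange(fs * 2) / fs                        # 2 s of noise-modulated "speech"
audio = np.sin(2 * np.pi * 4 * t) * np.random.randn(len(t)) * 0.1
env = rms_envelope(audio, fs)                     # 10 ms frames -> 100 Hz envelope
lip = np.abs(np.sin(2 * np.pi * 4 * np.arange(len(env)) / 100))  # fake lip-area track
print(windowed_correlation(env, lip).mean())
```

In this framing, bandpass filtering or amplitude adjustment of selected words in the target sentence would alter `env` and thereby raise or lower the windowed correlation, which is the manipulation the study uses to probe the BCMP.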