Abstract
Norrix and Green [J. Acoust. Soc. Am. 99, 2591–2592 (1996)] provided evidence for cross-modal context effects on the perception of /r/ and /l/ in a stop cluster. Tokens from a synthetic /iri–ili/ continuum were dubbed onto a visual /ibi/. When presented in an auditory-visual (AV) condition, the tokens were perceived as ranging from /ibri/ to /ibli/. Results indicated a reliable shift in the AV condition relative to an auditory-only (AO) condition. This shift was in accord with the acoustic consequences of articulating /r/ and /l/ in a stop cluster. In the current study, sine-wave analogs of the /iri–ili/ tokens were constructed and presented to two groups of observers in AO and AV conditions. Group One was told they would hear schematic speech sounds and was instructed to identify what they heard as /r/ or /l/. Group Two made up their own criteria for classifying the tokens as nonspeech sounds. Results indicated a reliable shift in the /r–l/ boundary between the AO and AV conditions for the speech group only, suggesting that the influence of the visual articulatory context depends upon listeners interpreting the auditory tokens as speech. [Work supported by NIDCD, NIH.]
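For readers unfamiliar with sine-wave analogs, the sketch below illustrates the general technique: each formant of a speech token is replaced by a single time-varying sinusoid that follows the formant's frequency trajectory, stripping away the broadband harmonic structure while preserving the spectro-temporal pattern. The formant values shown are illustrative placeholders (the primary /r/–/l/ cue is the F3 onset, low for /r/ and high for /l/); they are not the stimulus parameters used in the study, and the function names are hypothetical.

```python
import numpy as np

def sine_wave_analog(formant_tracks, duration=0.5, sr=16000):
    """Build a sine-wave analog: one time-varying sinusoid per formant track.

    formant_tracks: list of (start_hz, end_hz) pairs, linearly interpolated
                    over the token duration (placeholder trajectories, not
                    the original stimulus values).
    """
    n = int(duration * sr)
    signal = np.zeros(n)
    for start_hz, end_hz in formant_tracks:
        freq = np.linspace(start_hz, end_hz, n)     # instantaneous frequency (Hz)
        phase = 2 * np.pi * np.cumsum(freq) / sr    # integrate frequency to get phase
        signal += np.sin(phase)
    return signal / len(formant_tracks)             # keep amplitude in range

# Hypothetical continuum endpoints: an /r/-like token with a low F3 onset
# and an /l/-like token with a high F3 onset; intermediate onsets would
# form the steps of an /iri–ili/-style continuum.
r_like = sine_wave_analog([(300, 300), (1200, 2100), (1600, 2800)])
l_like = sine_wave_analog([(300, 300), (1200, 2100), (2800, 2800)])
```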