Abstract

The identification of isolated words in speech-reading environments is extremely prone to error. Many of these errors are due to the impoverished nature of spoken stimuli when the only perceptual information available is visual; some estimates place the number of visually discriminable segments at just over 25% of the number discriminable in auditory-only environments. Previous research has shown that lipreaders' confusions are patterned with respect to the perceptual discriminability of phonetic segments in visual-only environments. In addition, other sources of information, such as phonotactic, lexical, and semantic constraints, can play a role in speech-reading performance. The current study examined the speech-reading responses generated by 200 normal-hearing participants observing 300 isolated English words, each spoken by 10 talkers. The responses were analyzed to determine whether errors were random or patterned in a way that reflects the use of partial information and lexical constraints during visual-only spoken word recognition.
