Abstract

The identification of isolated words in speech-reading environments is extremely prone to error. Many of these errors are due to the impoverished nature of spoken stimuli when the only perceptual information available is visual; some estimates place the number of visually discriminable segments at just over 25% of the number discriminable in auditory-only environments. Previous research has shown that lipreaders' confusions are patterned with respect to the perceptual discriminability of phonetic segments in visual-only environments. In addition, other sources of information, such as phonotactic, lexical, and semantic constraints, can play a role in speech-reading performance. The current study examined the speech-reading responses of 200 normal-hearing participants who observed 300 isolated English words, each spoken by 10 talkers. The responses were analyzed to determine whether errors were random or patterned in a way that reveals the use of partial information and lexical constraints during visual-only spoken word recognition.
