Abstract

The long‐range goal of this research is to understand the visual phonetic and cognitive/linguistic processes underlying the lipreading of sentences. Bernstein et al. [J. Acoust. Soc. Am. Suppl. 1 85, S59 (1989)] described development of a sequence comparison system that produces a putative alignment of stimulus and response phonemes for lipread sentences. Such alignments permit sentences to be scored at the phonemic level and also permit examination of the types of errors that occur. In this study the sequence comparator was applied to a database containing responses of 139 normal‐hearing subjects who viewed the 100 CID everyday sentences [Davis and Silverman, 1970], spoken by a male or a female talker. Analysis of the alignments was made possible by the development of a powerful parsing program that tabulates the frequency of user‐specified stimulus or response patterns and generates confusion matrices for selected portions of these patterns. To examine the impact of sentence environment, vowel and consonant confusion matrices derived from the sentences were compared to those obtained from nonsense syllables. To probe for context effects, performance on individual sentences was examined as a function of sentence, word, and syllable characteristics. [Work supported by NIH.]
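The abstract does not state which algorithm the sequence comparator uses to align stimulus and response phonemes, nor how the parsing program tabulates confusions. A minimal sketch of one plausible approach is shown below: standard edit-distance (dynamic-programming) alignment of two phoneme strings, followed by tabulation of a stimulus-by-response confusion matrix over the aligned pairs. The function names, the gap/substitution costs, and the example ARPAbet-style phoneme strings are illustrative assumptions, not taken from the study.

```python
from collections import defaultdict

# Sketch of phoneme-sequence alignment via standard edit-distance dynamic
# programming with backtracking (an assumed method, not necessarily the
# authors'). Gaps from insertions/deletions are marked with "-".

def align(stimulus, response, sub_cost=1, gap_cost=1):
    """Return one optimal alignment as (stimulus_phoneme, response_phoneme) pairs."""
    m, n = len(stimulus), len(response)
    # dp[i][j] = minimal cost of aligning stimulus[:i] with response[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap_cost
    for j in range(1, n + 1):
        dp[0][j] = j * gap_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            diag = dp[i - 1][j - 1] + (0 if stimulus[i - 1] == response[j - 1] else sub_cost)
            dp[i][j] = min(diag, dp[i - 1][j] + gap_cost, dp[i][j - 1] + gap_cost)
    # Backtrack from the bottom-right corner to recover one optimal alignment.
    pairs, i, j = [], m, n
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                dp[i][j] == dp[i - 1][j - 1] + (0 if stimulus[i - 1] == response[j - 1] else sub_cost)):
            pairs.append((stimulus[i - 1], response[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + gap_cost:
            pairs.append((stimulus[i - 1], "-"))   # stimulus phoneme omitted in response
            i -= 1
        else:
            pairs.append(("-", response[j - 1]))   # phoneme inserted in response
            j -= 1
    return list(reversed(pairs))

def confusion_matrix(alignments):
    """Tabulate stimulus-by-response counts over a collection of alignments."""
    counts = defaultdict(lambda: defaultdict(int))
    for pairs in alignments:
        for stim, resp in pairs:
            counts[stim][resp] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical stimulus/response phoneme strings for one lipread fragment.
    stim = ["W", "AO", "K", "S", "L", "OW", "L", "IY"]
    resp = ["W", "AA", "T", "S", "OW", "P", "IY"]
    pairs = align(stim, resp)
    matrix = confusion_matrix([pairs])
    for s, r in pairs:
        print(f"{s:>3} -> {r}")
```

Given alignments of this kind for every sentence response, rows of the resulting matrix can be restricted to vowels or consonants, which is the sort of tabulation the abstract describes for comparing sentence-derived confusions against those obtained from nonsense syllables.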
