Abstract

Forty-six hearing-impaired young adults were tested with a newly developed instrument that requires a discrimination response to assess viseme perception as a component of lipreading performance. Stimuli were videotaped sentences that, on half of the trials, differed from a captioned target sentence by a single viseme embedded in the middle of the sentence. Discrimination within six visual categories was tested: gross syllable pattern, consonant articulation--lips, consonant articulation--tongue, vowel articulation--extreme lip shapes, vowel articulation--graded lip shapes, and vowel articulation--jaw movement. Test data were analyzed using an item response theory model. The results indicated that the test data conformed to the expectations of the Rasch model for person measurement. Relationships between subjects' test scores and their communication characteristics were also examined. The data provide evidence that the test protocol, at this early stage of development, is useful for assessing at least one perceptual component of lipreading performance.
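
For reference, the item response theory model named in the abstract is the Rasch model; in its standard dichotomous form (the abstract does not give the authors' exact formulation), the probability that person n responds correctly to item i depends only on the difference between person ability \theta_n and item difficulty b_i:

P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{e^{\theta_n - b_i}}{1 + e^{\theta_n - b_i}}

Conformity to this model implies that a single person parameter (here, a lipreading discrimination ability) is sufficient to account for the pattern of correct and incorrect responses across items.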
