Abstract

A long-term training paradigm in lipreading was used to test the fuzzy logical model of perception (FLMP). This model has been used successfully to describe the joint contribution of audible and visible speech in bimodal speech perception. Tests of the model were extended in the present experiment to include the prediction of confusion matrices, as well as performance at several different levels of skill. The predictions of the FLMP were contrasted with the predictions of a prelabeling integration model (PRLM). Subjects were taught to lipread 22 initial consonants in three different vowel contexts. Training involved a variety of discrimination and identification lessons with the consonant-vowel syllables. Repeated testing was given on syllables, words, and sentences. The test items were presented visually, auditorily, and bimodally, at normal rate or three times normal rate. The subjects improved in their lipreading ability across all three types of test items. Replicating previous results, the present study illustrates that substantial gains in lipreading performance are possible. Relative to the PRLM, the FLMP gave a better description of the confusion matrices at both the beginning and the end of practice. One new finding from the present study is that the FLMP can account for the gains in bimodal speech perception as subjects improve their lipreading and listening abilities.
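For readers without access to the full text, it may help to recall the FLMP's standard integration rule (stated here as background; the notation is illustrative and not taken from this abstract). Each modality is evaluated independently, and the resulting supports are combined multiplicatively and normalized by the relative goodness rule:

P(r | A_i, V_j) = (a_ir · v_jr) / Σ_k (a_ik · v_jk)

where a_ir denotes the degree to which auditory stimulus A_i supports response alternative r, and v_jr the corresponding visual support. Applied to every stimulus-response pair, this rule generates a full predicted bimodal confusion matrix from the unimodal parameters, which is the sense in which the model's predictions could be tested against the observed confusion matrices. The PRLM, by contrast and as its name suggests, is characterized as integrating the auditory and visible cues before categorical labeling takes place.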
