Abstract

Two experiments were run to determine whether individual differences in auditory speech processing are predictable from those in speechreading, using a total of 90 normal-hearing subjects. Tests included single words and sentences. The speech was recorded on a video disk by a male actor (Bernstein and Eberhardt, 1986, Johns Hopkins Lipreading Corpus). The auditory speech was presented with a white-noise masker at −7 dB Sp/N. The correlations between overall auditory and visual performance were 0.52 and 0.45 in the two studies, suggesting the existence of a modality-independent ability to perceive linguistic "wholes" on the basis of linguistic fragments. Subjects also identified printed sentences with 40%–60% of the portions of the letters deleted. Performance on that "visual-fragments" test also correlated significantly with both visual and auditory speech processing. [Work supported by AFOSR, through a grant to the Institute for the Study of Human Capabilities.]