Abstract

A technique for the simulation of deafness, and its alleviation by a new "frequency transposition" device, are described. The effects of visual cues (provided by the experimenter's articulatory movements) and of recoded auditory information (provided by the "frequency transposer") on subjects' ability to make a 'judged match' imitation response to spoken CVC nonsense syllables were evaluated. The improved imitation, when visual cues were provided, of both "manner" of articulation (p < 0.01) and "place" of articulation (p < 0.001) of certain fricative, sibilant and stop consonants, together with a highly significant improvement in overall imitation scores (p ≪ 0.0005), supported the conclusion that normal-hearing subjects, although untrained in lipreading, had a well-developed ability to 'integrate' auditory and visual speech information. Further, the improved imitation of both "manner" of articulation (p < 0.025) and "place" of articulation (p < 0.025) of certain fricative, sibilant and stop consonants brought about by 'recoding' the speech signal (without any experimentally guided articulation or discrimination training) supported the hypothesis that the recoding technique would produce a signal sufficiently 'speech-like' to be utilised with effect by the ear-brain system, and that it may therefore be of use in the rehabilitation of the "perceptively" deaf.
