Abstract

Four cochlear implant subjects fitted with single‐channel sound processors identified consonants taken from a 23‐consonant set and spoken in a VCV context. The videotaped VCV items were presented in three conditions: lipreading alone, stimulation alone, and stimulation plus lipreading. The subjects had had no training or testing with VCVs prior to the tests reported here. Tests were conducted within the first 0–8 months after the first fitting of the wearable single‐channel processor. A total of 7590 identifications were collected and analyzed. All subjects scored near 30% correct in the lipreading alone condition, but 3%–12% correct (near chance) in the stimulation alone condition. A broader range of scores was obtained when stimulation was added to lipreading (30%–55% correct). Errors were classified in terms of three features: voicing, place of articulation, and manner of articulation. Detailed scrutiny of the error profiles indicates that beyond improving overall percent correct identification, the addition of stimulation to lipreading allows subjects to make fewer two‐ and three‐feature confusions, narrowing down the choice of alternatives to those that differ by a single feature. [Work supported by NIH.]
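The feature-based error classification described above can be illustrated with a minimal sketch: each stimulus–response confusion is scored by how many of the three features (voicing, place, manner) differ. The small feature table and the feature_distance helper below are illustrative assumptions for a handful of consonants, not the paper's 23-consonant set or the authors' analysis code.

    # Minimal sketch of feature-based confusion scoring (illustrative only;
    # the feature table is an assumed subset, not the study's 23-consonant set).
    FEATURES = {
        # consonant: (voicing, place, manner)
        "p": ("voiceless", "bilabial", "stop"),
        "b": ("voiced",    "bilabial", "stop"),
        "t": ("voiceless", "alveolar", "stop"),
        "d": ("voiced",    "alveolar", "stop"),
        "s": ("voiceless", "alveolar", "fricative"),
        "z": ("voiced",    "alveolar", "fricative"),
        "m": ("voiced",    "bilabial", "nasal"),
        "n": ("voiced",    "alveolar", "nasal"),
    }

    def feature_distance(stimulus: str, response: str) -> int:
        """Count how many of the three features differ between stimulus and response."""
        return sum(a != b for a, b in zip(FEATURES[stimulus], FEATURES[response]))

    # /p/ heard as /b/ differs only in voicing (one-feature confusion),
    # while /p/ heard as /z/ differs in voicing, place, and manner.
    print(feature_distance("p", "b"))  # 1
    print(feature_distance("p", "z"))  # 3

Under this scoring, the reported benefit of adding stimulation to lipreading corresponds to a shift in the error distribution toward distance-1 confusions and away from distance-2 and distance-3 confusions.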
