Abstract

Second language (L2) speech perception is a challenging process: listeners must cope with both an imperfect auditory signal and imperfect L2 knowledge. The aim of L2 speech perception, however, is to extract linguistic meaning and enable communication between interlocutors in the language of input. Normal-hearing listeners can perceive and understand auditory messages despite distortions and background noise, as they can tolerate a dramatic reduction in the spectral and temporal information present in the signal. In recognising speech, listeners can be substantially assisted by looking at the speaker's face. Visual information matters even for intelligible speech sounds, indicating that auditory and visual information are combined. The present study examines how audio-visual integration affects Cypriot-Greek (CG) listeners' recognition of plosive consonants at the word level in L2 English. The participants were 14 first language (L1) CG users who were non-native speakers of English. They completed a perceptual minimal-set task requiring the extraction of speech information from unimodal auditory stimuli, unimodal visual stimuli, bimodal audio-visual congruent stimuli, and bimodal incongruent stimuli. The findings indicated that overall performance was best in the bimodal congruent condition. The results point to a multisensory, speech-specific mode of perception that plays an important role in alleviating moderate to severe L2 comprehension difficulties. CG listeners' success appears to depend on the ability to relate what they see to what they hear.
