Abstract

We examined whether language and culture influence speech perception in face-to-face communication. Native speakers of Japanese, Spanish, and English identified the same synthetic unimodal and bimodal speech syllables. Five-step /ba/–/da/ continua were synthesized along auditory and visual dimensions by varying properties of the syllable at its onset. In the first experiment, the three language groups identified the test syllables as /ba/ or /da/; in the second, Japanese and English speakers were given an open-ended set of response alternatives. For all language groups, identification of the speech segments was influenced by both auditory and visual sources of information. Given these results, we were able to reject an auditory dominance model (ADM), which assumes that the contribution of visible speech is dependent on poor-quality audible speech. The results also falsified a categorical model of perception (CMP), in which the auditory and visual sources are categorized before they are combined. The fuzzy logical model of perception (FLMP) provided a good description of performance, supporting the claim that multiple sources of continuous information are evaluated and integrated in speech perception. The absence of differences in the nature of processing across language groups suggests that the underlying mechanisms for speech perception are similar across languages and cultures.
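For reference, a sketch of the FLMP's standard integration rule as formulated in Massaro's work (the symbols below are ours, not drawn from this abstract): with a_i the continuous degree of auditory support for /da/ at auditory continuum step i, and v_j the degree of visual support at visual step j, the predicted probability of a /da/ identification is

\[
P(\text{/da/} \mid A_i, V_j) = \frac{a_i\, v_j}{a_i\, v_j + (1 - a_i)(1 - v_j)}.
\]

Because a_i and v_j remain continuous until they are combined, this rule predicts that one modality exerts its greatest influence when the other is ambiguous, a pattern that a categorical model, which discretizes each source before combination, cannot reproduce.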
