Abstract

Recognizing spoken words in one's first language (L1) is usually effortless, but the same task can be much more demanding when listening to a second language (L2). In order to decode a speaker's message, listeners must recognize individual words in the speaker's utterance. Spoken word recognition involves two central processes: (a) multiple word activation and competition and (b) segmentation of the continuous speech stream. One major challenge for bilingual listeners (here, anyone mastering multiple languages, whether acquired from birth or later in life) is that more words compete for recognition, in particular when listening to their L2. For bilingual listeners, the set of potential word candidates is multiplied through parallel activation of words from the wrong language alongside words from the language they actually heard—words that monolingual listeners would either not consider or would deactivate much faster. Another challenge is that L2 listeners are less efficient than native listeners at segmenting the continuous speech stream into individual words. For L1 listeners, the task of segmentation is facilitated by numerous cues to word boundaries, including lexical subtraction, prosodic cues, phonotactic constraints, and phonetic detail. Although L2 listeners can exploit these cues to some extent, they often cannot do so as successfully as L1 listeners. This entry describes the challenges of bilingual spoken word recognition in light of the underlying cognitive processes, opportunities to overcome those challenges, and the benefits of having another language at one's disposal during bilingual spoken word recognition.
