Abstract

Two perceptual experiments investigated how the suprasegmental information of monosyllables is perceived and exploited in spoken English word recognition by listeners of English and Taiwan Mandarin (TM). In Experiment I, an auditory lexical decision task presented correctly stressed English words and mis-stressed nonwords (e.g. camPAIGN vs. *CAMpaign) for lexical decision; TM listeners perceived the difference between stressed and unstressed syllables with native-like accuracy and speed. Experiment II examined how the perceived suprasegmental contrast constrains English lexical access, using a cross-modal fragment priming task in which listeners made a lexical decision on a visually presented English word or nonword after hearing an auditory prime, a spoken word-initial syllable. Both English and TM listeners recognized the displayed word (e.g. campus) faster after a stress-matching prime (e.g. CAM-) and after a stress-mismatching prime (e.g. cam-) than after a control prime with mismatching segments (e.g. MOUN-). This indicates that, for either group, suprasegmental information does not inhibit a segmentally matching but suprasegmentally mismatching word candidate, even though TM is a language in which lexical prosody is expressed syllabically and whose listeners tend to interpret lexical stress tonally. However, both groups responded more slowly after stressed primes than after unstressed ones, presumably because the former generally have more possible continuations than the latter. It is therefore concluded that when recognizing spoken English words, both native and non-native (TM-speaking) listeners can exploit the suprasegmental cues of monosyllables, although these cues are not strong enough to outweigh the segmental cues.