Abstract
Recognition of speech in noise is facilitated when spoken sentences are repeated a few minutes later, but the levels of representation involved in this effect have not been specified. Three experiments tested whether the effect would transfer across modalities and languages. In Experiment 1, participants listened to sets of high- and low-constraint sentences and read other sets in an encoding phase. At test, these sentences and new sentences were presented in noise, and participants attempted to report the final word of each sentence. Recognition was more accurate for repeated than for new sentences in both modalities. Experiment 2 was identical except for the implementation of an articulatory suppression task at encoding to reduce phonological recoding during reading. The cross-modal repetition priming effect persisted but was weaker than when the modality was the same at encoding and test. Experiment 3 showed that the repetition priming effect did not transfer across languages in bilinguals. Taken together, the results indicate that the facilitated recognition of repeated speech is based on a combination of modality-specific processes at the phonological word form level and modality-general processes at the lemma level of lexical representation, but the semantic level of representation is not involved.