Abstract

Everyday speech is rife with errors and disfluencies, yet processing what we hear usually feels effortless. How does the language comprehension system accomplish such an impressive feat? The current experiment tests the hypothesis that listeners draw on relevant contextual and linguistic cues to anticipate speech errors and mentally correct them, even before receiving an explicit correction from the speaker. In a visual-world eye-tracking experiment, we monitored participants' eye movements to objects in a display while they listened to utterances containing reparandum-repair speech errors (e.g., ". . . his cat, uh I mean his dog . . ."). The contextual plausibility of the misspoken word and the certainty with which the speaker uttered it were systematically manipulated. Results showed that listeners immediately exploited these cues to generate top-down expectations about the speaker's communicative intention. Crucially, listeners used these expectations to constrain the bottom-up speech input and mentally correct perceived speech errors, even before the speaker initiated the correction. The results provide powerful evidence that correcting speech errors is a joint process involving both the speaker and the listener. (PsycINFO Database Record (c) 2019 APA, all rights reserved.)
