Abstract

Listeners activate speech-sound categories in a gradient way, and this information is maintained and affects activation of items at higher levels of processing (McMurray et al., 2002; Toscano et al., 2010). Recent findings by Kapnoula et al. (2017) suggest that the degree to which listeners maintain within-category information varies across individuals. Here we assessed the consequences of this gradiency for speech perception. To test this, we collected a measure of gradiency for different listeners using the visual analogue scaling (VAS) task used by Kapnoula et al. (2017). We also collected two independent measures of speech-perception performance: a visual world paradigm (VWP) task measuring participants' ability to recover from lexical garden paths (McMurray et al., 2009) and a task measuring participants' perception of isolated words in noise. Our results show that categorization gradiency does not predict performance in the speech-in-noise task. However, higher gradiency predicted a higher likelihood of recovery from temporarily misleading information presented in the VWP task. These results suggest that gradient activation of speech-sound categories is helpful when listeners need to reconsider their initial interpretation of the input, making them more efficient in recovering from errors.
