Abstract
Listeners activate speech-sound categories in a gradient way, and this information is maintained and affects the activation of items at higher levels of processing (McMurray et al., 2002; Toscano et al., 2010). Recent findings by Kapnoula et al. (2017) suggest that the degree to which listeners maintain within-category information varies across individuals. Here we assessed the consequences of this gradiency for speech perception. We collected a measure of gradiency for each listener using the visual analogue scaling (VAS) task from Kapnoula et al. (2017), along with two independent measures of speech-perception performance: a visual world paradigm (VWP) task measuring participants' ability to recover from lexical garden paths (McMurray et al., 2009) and a speech-perception task measuring participants' perception of isolated words in noise. Categorization gradiency did not predict performance in the speech-in-noise task. However, higher gradiency predicted a higher likelihood of recovery from temporarily misleading information in the VWP task. These results suggest that gradient activation of speech-sound categories is helpful when listeners need to reconsider their initial interpretation of the input, making them more efficient at recovering from errors.