Abstract

Prior research has demonstrated that listeners are sensitive to changes in the indexical (talker-specific) characteristics of speech input, suggesting that these signal-intrinsic features are integrally encoded in memory for spoken words. Given that listeners frequently must contend with concurrent environmental noise, to what extent do they also encode signal-extrinsic details? Native English listeners’ explicit memory for spoken English monosyllabic and disyllabic words was assessed as a function of consistency versus variation in the talker’s voice (talker condition) and background noise (noise condition) using a delayed recognition memory paradigm. The speech and noise signals were spectrally separated, such that changes in a simultaneously presented non-speech signal (background noise) from exposure to test would not be accompanied by concomitant changes in the target speech signal. The results revealed that listeners can encode both signal-intrinsic talker and signal-extrinsic noise information into integrated cognitive representations, critically, even when the two auditory streams are spectrally non-overlapping. However, the extent to which extra-linguistic episodic information is encoded alongside linguistic information appears to be modulated by syllabic characteristics, with specificity effects found only for monosyllabic items. These findings suggest that encoding and retrieval of episodic information during spoken word processing may be modulated by lexical characteristics.
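Although this excerpt does not state the dependent measure, explicit old/new recognition tasks of this kind are commonly summarized with signal-detection sensitivity (d′), comparing test items whose voice and noise match exposure against items where one or both change. The Python sketch below is purely illustrative: the d_prime helper, the log-linear correction, and all trial counts are assumptions for exposition, not the authors' analysis.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction (add 0.5 to each count, 1 to each total)
    # keeps hit and false-alarm rates away from 0 and 1, where the
    # z-transform would be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts: 60 old and 60 new items in each condition.
match_dp = d_prime(hits=48, misses=12, false_alarms=10, correct_rejections=50)
mismatch_dp = d_prime(hits=40, misses=20, false_alarms=10, correct_rejections=50)

# A specificity effect would surface as higher sensitivity when the
# talker and background noise at test match those heard at exposure.
print(f"match d' = {match_dp:.2f}, mismatch d' = {mismatch_dp:.2f}")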

Highlights

  • For successful spoken word recognition to take place, listeners must match the incoming auditory input to the appropriate lexical representation stored in memory

  • No participant scored below 90% accuracy on the word identification task, and as such, all participants were included in the analysis of the recognition memory task

  • The present study investigated the extent to which talker identity and noise information are integrally encoded with linguistic information in memory



Introduction

For successful spoken word recognition to take place, listeners must match the incoming auditory input to the appropriate lexical representation stored in memory. This is a complex process, as individual instances of a given word vary as a result of changes in talker, speaking style, or a whole host of other linguistic, paralinguistic, and situation-specific characteristics. Previous research has posited that one way listeners could handle this variability is by encoding the idiosyncratic characteristics of a particular speech event into memory and retrieving such rich representations for the processing of subsequent speech events with the same or similar instance-specific details (e.g., Goldinger, 1998). As a step towards delimiting which perceptual dimensions external to the speech signal listeners are encoding into memory, the present study investigated the extent to which talker identity and background noise information are integrally encoded with linguistic information in memory.
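The episodic account invoked here can be made concrete. The Python toy below follows the retrieval scheme of MINERVA 2 (Hintzman, 1986), the exemplar model that Goldinger (1998) adapted for speech: every exposure stores a complete trace, and a test probe activates each stored trace in proportion to the cube of their similarity. The feature dimensions, vector sizes, and single-exposure setup are hypothetical simplifications for exposition, not the materials of the present study.

import numpy as np

rng = np.random.default_rng(1)

def make_features(n):
    # Random +1/-1 feature vector standing in for one perceptual dimension.
    return rng.choice([-1, 1], size=n)

# An episodic trace concatenates linguistic, talker, and noise features,
# mirroring the claim that all three are encoded in one integrated trace.
word = make_features(20)
talker_a, talker_b = make_features(10), make_features(10)
noise_x, noise_y = make_features(10), make_features(10)

traces = np.array([np.concatenate([word, talker_a, noise_x])])  # one exposure

def echo_intensity(probe, traces):
    # MINERVA 2 similarity: dot product normalized by the number of
    # features that are nonzero in the probe or the trace; cubing the
    # similarity makes close matches dominate retrieval.
    n_rel = ((probe != 0) | (traces != 0)).sum(axis=1)
    sim = (traces @ probe) / np.maximum(n_rel, 1)
    return float((sim ** 3).sum())

same = np.concatenate([word, talker_a, noise_x])     # voice and noise match
changed = np.concatenate([word, talker_b, noise_y])  # both change at test

print(echo_intensity(same, traces))     # stronger "old" signal
print(echo_intensity(changed, traces))  # weaker: a specificity cost

Because the linguistic features still match, the changed probe is not at floor; the model predicts a graded recognition cost for mismatched talker and noise details, which is the signature specificity effect the study tests for.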

