Abstract

Over the course of language development, infants learn native speech categories and word boundaries from speech input. Although speech category learning and word segmentation learning occur in parallel, most investigations have focused on one, assuming somewhat mature development of the other. To investigate the extent to which listeners can simultaneously solve the categorization and segmentation learning challenges, we created an artificial, non‐linguistic stimulus space that modeled the acoustic complexities of natural speech by recording a single talker’s multiple utterances of a set of sentences containing four keywords. There was acoustic variability across utterances, presenting a categorization challenge. The keywords were embedded in continuous speech, presenting a segmentation challenge. Sentences were spectrally rotated, rendering them wholly unintelligible, and presented within a video‐game training paradigm that does not rely upon explicit feedback and yet is effective in training non‐speech and ...
