Abstract
Speech perception is a multimodal phenomenon, with what we see impacting what we hear. In this study, we examine how visual information impacts English listeners’ segmentation of words from an artificial language containing no cues to word boundaries other than the transitional probabilities (TPs) between syllables. Participants (N = 60) were assigned to one of three conditions: Still (a static image), Trochaic (the image loomed toward the listener at syllable onsets), or Iambic (the image loomed toward the listener at syllable offsets). Participants also heard either an easy or a difficult variant of the language. Importantly, both languages lacked auditory prosody. Overall performance in a 2AFC test was better in the easy language (67%) than in the difficult language (57%). In addition, across languages, listeners performed best in the Trochaic condition (67%) and worst in the Iambic condition (56%); performance in the Still condition fell in between (61%). English listeners are known to perceive strong syllables as word onsets. Thus, participants likely found the Trochaic condition easiest because the moving image led them to perceive temporally co-occurring syllables as strong. We are currently testing 6-year-olds (N = 25) with these materials. Thus far, children’s performance collapsed across conditions is similar to adults’ (60%). However, visual information may impact children’s performance less.
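For readers unfamiliar with the statistic, the forward TP from syllable A to syllable B is conventionally computed as frequency(AB) / frequency(A); in such artificial languages, TPs are high within words and drop at word boundaries. Below is a minimal sketch of this computation; the syllable stream and example words are hypothetical illustrations, not the study's actual stimuli.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TPs: TP(a -> b) = freq(a followed by b) / freq(a as a bigram onset)."""
    # Count each syllable's occurrences as the first member of a bigram.
    onsets = Counter(syllables[:-1])
    bigrams = Counter(zip(syllables, syllables[1:]))
    return {(a, b): count / onsets[a] for (a, b), count in bigrams.items()}

# Hypothetical continuous stream: the "words" tu-pi-ro and go-la-bu
# concatenated with no pauses or prosodic cues to their boundaries.
stream = "tu pi ro go la bu tu pi ro go la bu tu pi ro".split()
for (a, b), tp in sorted(transitional_probabilities(stream).items()):
    print(f"TP({a} -> {b}) = {tp:.2f}")
```

In a stream like this, within-word transitions (e.g., tu -> pi) approach a TP of 1.0, while across-boundary transitions (e.g., ro -> go) are lower, which is the statistical cue listeners can exploit for segmentation.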