Abstract
In the contexts of language learning and music processing, hand gestures conveying acoustic information visually influence perception of speech and non-speech sounds (Connell et al., 2013; Morett & Chang, 2015). Currently, it is unclear whether this effect is due to these gestures' use of the human body to highlight relevant features of language (embodiment) or the cross-modal mapping between the visual motion trajectories of these gestures and corresponding auditory features (conceptual metaphor). To address this question, we examined identification of the pitch contours of lexical tones and non-speech analogs learned with pitch gesture, comparable dot motion, or no motion. Critically, pitch gesture and dot motion were either congruent or incongruent with the vertical conceptual metaphor of pitch. Consistent with our hypotheses, we found that identification accuracy increased for tones learned with congruent pitch gesture and dot motion, whereas it remained stable or decreased for tones learned with incongruent pitch gesture and dot motion. These findings provide the first evidence that both embodied and non-embodied visual stimuli congruent with the vertical conceptual metaphor of pitch enhance lexical and non-speech tone learning. Thus, they illuminate the influences of conceptual metaphor and embodiment on lexical and non-speech auditory perception, providing insight into how they can be leveraged to enhance language learning and music processing.