Abstract

We propose a new framework to understand singing accuracy, based on multi-modal imagery associations: the MMIA model. This model is based on recent data suggesting a link between auditory imagery and singing accuracy, evidence for a link between imagery and the functioning of internal models for sensorimotor associations, and the use of imagery in singing pedagogy. By this account, imagery involves automatic associations between different modalities, which in the present context comprise associations between pitch height and the regulation of vocal fold tension. Importantly, these associations are based on probabilistic relationships that may vary with respect to their precision and accuracy. We further describe how this framework may be extended to multi-modal associations at the sequential level, and how these associations develop. The model we propose here constitutes one part of a larger architecture responsible for singing, but at the same time is cast at a general level that can extend to multi-modal associations outside the domain of singing.
