Abstract

In recent years, the push to embrace naturalistic stimuli over artificial designs has enriched what we know about the neural underpinnings of human attention, memory, and communication in real life. Previous work using natural stories scrambled at the word, sentence, and paragraph levels has revealed a hierarchy of brain regions that organize natural acoustic input at these different timescales. While this approach has advanced our understanding of language processing, far fewer studies to date have explored the neural underpinnings of music perception, let alone music production, in naturalistic settings. In our novel paradigm, we asked expert pianists to play musical pieces scrambled at different timescales (measure, phrase, and section) on a non-ferromagnetic piano keyboard inside the fMRI scanner. This dataset provides unprecedented access to expert musicians’ brains from their first exposure to a novel piece and over the course of learning to play it. We found distinct patterns of tuning to musical timescales across several clusters of brain regions (e.g., sensory/motor, parietal, and frontal/memory regions). We also found that musical predictability modulates functional connectivity between auditory, motor, and higher-order regions during performance. Finally, we applied several machine learning analyses to characterize how the brain dynamically represents acoustic and musical features.
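
To make the scrambling manipulation concrete, here is a minimal sketch (not the authors' actual stimulus-preparation code) of how a piece represented as an ordered list of measures could be shuffled at the measure, phrase, or section level. The grouping constants `MEASURES_PER_PHRASE` and `PHRASES_PER_SECTION` are hypothetical placeholders; real phrase and section boundaries would be defined musically rather than by fixed counts.

```python
import random

# Hypothetical grouping sizes for illustration only.
MEASURES_PER_PHRASE = 4
PHRASES_PER_SECTION = 4

def chunk(seq, size):
    """Split a sequence into consecutive chunks of the given size."""
    return [seq[i:i + size] for i in range(0, len(seq), size)]

def scramble(measures, timescale, seed=0):
    """Shuffle a piece (a list of measures) at the requested timescale.

    Units smaller than the chosen timescale keep their internal order;
    only the order of the units themselves is permuted.
    """
    rng = random.Random(seed)
    if timescale == "measure":
        units = [[m] for m in measures]
    elif timescale == "phrase":
        units = chunk(measures, MEASURES_PER_PHRASE)
    elif timescale == "section":
        units = chunk(measures, MEASURES_PER_PHRASE * PHRASES_PER_SECTION)
    else:
        raise ValueError(f"unknown timescale: {timescale}")
    rng.shuffle(units)
    return [m for unit in units for m in unit]

# Example: 32 numbered measures scrambled at the phrase level.
piece = list(range(32))
print(scramble(piece, "phrase"))
```

The key property this illustrates is that coarser scrambling preserves longer stretches of intact musical structure, which is what lets the paradigm dissociate brain regions tuned to short versus long musical timescales.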
