Abstract

Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in the auditory cortices and the temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features onto auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for the musical pieces with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
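For readers who want a concrete picture of the two-stage approach, the following is a minimal sketch, not the authors' implementation: it assumes ridge regression for the encoding stage and Pearson-correlation matching for the identification stage, and all function names (fit_encoding_model, identify_piece, shannon_entropy) are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_encoding_model(features_train, bold_train, alpha=1.0):
    # Stage 1 (encoding): map musical features (time points x features)
    # to voxel responses (time points x voxels) with ridge regression.
    model = Ridge(alpha=alpha)
    model.fit(features_train, bold_train)
    return model

def identify_piece(model, candidate_features, observed_bold):
    # Stage 2 (decoding): predict the BOLD response for each candidate
    # piece and return the index of the best-matching one.
    scores = []
    for feats in candidate_features:
        predicted = model.predict(feats)
        # Mean Pearson correlation across voxels between the predicted
        # and observed time courses.
        r = np.mean([np.corrcoef(predicted[:, v], observed_bold[:, v])[0, 1]
                     for v in range(observed_bold.shape[1])])
        scores.append(r)
    return int(np.argmax(scores))

def shannon_entropy(signal, bins=32):
    # Shannon entropy (in bits) of a signal's amplitude distribution,
    # the kind of information measure the abstract links to accuracy.
    counts, _ = np.histogram(signal, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))
```

Under these assumptions, identification accuracy would be estimated by calling identify_piece once per held-out excerpt while truncating the time axis (to vary duration) or masking columns of the BOLD matrix (to vary the number of voxels).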

Highlights

  • We varied the number of voxels drawn from a priori regions, including Heschl’s gyrus and the superior and middle temporal gyri, and measured identification accuracy for the ongoing musical piece as time points were added to each excerpt

  • We investigated functional magnetic resonance imaging (fMRI) brain responses to 40 musical pieces of various genres with a two-stage encoding-decoding model, which described the temporal evolution and the spatial location of the auditory-cortex voxels critical for identifying musical pieces


Introduction

Encoding and decoding models were first introduced in the visual and semantic domains[1,2,3,4]. In contrast to auditory models of spectro-temporal receptive fields, which have been used to encode natural sounds such as animal cries and environmental sounds[7], here we employ musical features that comprise both low-level and higher-level characteristics of musical dimensions (tonality, dynamics, rhythm, timbre)[8] and that have been thoroughly validated in behavioural studies of music perception[9,10,11]. Such musical features have been used in computational models to investigate brain responses to naturalistic musical stimuli with fMRI[12,13,14]. With our choice of various musical styles, we can explicitly test the generalization ability of our model.
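As a rough illustration of the kind of feature set involved, the sketch below computes frame-wise descriptors for the four dimensions named above using the librosa library; this is an assumption-laden analogue for illustration only, not the feature extraction pipeline used in the study.

```python
import numpy as np
import librosa

def extract_musical_features(path, sr=22050, hop_length=512):
    # Frame-wise descriptors loosely corresponding to the four dimensions:
    # dynamics, timbre, tonality, and rhythm.
    y, sr = librosa.load(path, sr=sr)
    rms = librosa.feature.rms(y=y, hop_length=hop_length)                    # dynamics (loudness)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr,
                                                 hop_length=hop_length)     # timbre (brightness)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop_length)  # tonality (pitch classes)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr, hop_length=hop_length)    # rhythm (global tempo)
    tempo = float(np.atleast_1d(tempo)[0])  # recent librosa versions return an array
    n_frames = rms.shape[1]
    # Stack into a (time points x features) matrix, broadcasting the scalar tempo.
    return np.vstack([rms, centroid, chroma,
                      np.full((1, n_frames), tempo)]).T
```

Such a matrix, downsampled to the fMRI repetition time, would play the role of features_train in the encoding sketch given earlier.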
