Abstract

Previous behavioural studies have shown that repeated presentation of a randomly chosen acoustic pattern leads to the unsupervised learning of some of its specific acoustic features. The objective of our study was to determine the neural substrate for the representation of freshly learnt acoustic patterns. Subjects first performed a behavioural task that resulted in the incidental learning of three different noise-like acoustic patterns. During subsequent high-resolution functional magnetic resonance imaging scanning, subjects were then exposed again to these three learnt patterns and to others that had not been learnt. Multi-voxel pattern analysis was used to test if the learnt acoustic patterns could be ‘decoded’ from the patterns of activity in the auditory cortex and medial temporal lobe. We found that activity in planum temporale and the hippocampus reliably distinguished between the learnt acoustic patterns. Our results demonstrate that these structures are involved in the neural representation of specific acoustic patterns after they have been learnt.

Highlights

  • Repeated presentation of randomly chosen, noise-like acoustic patterns leads to incidental, unsupervised learning of their specific features

  • As in Agus et al. [5], behavioural performance was better on repeated exemplars than on non-repeated exemplars, confirming learning of the acoustic patterns

  • Multi-voxel pattern analysis showed that activity in planum temporale and the hippocampus reliably distinguished the learnt acoustic patterns

Introduction

As humans we are constantly bombarded with sounds, many of which can be identified and assigned a semantic label.

After training in the MRI scanner, subjects were tested in a single session consisting of 20 trials for each of the three trained RefTC exemplars, presented in random order together with 60 trials of RTC and 120 trials of NTC (which were generated anew). The overall classification procedure involved splitting the fMRI data into two segments: a ‘training’ set used to train a classifier with the fixed regularization hyperparameter C = 1, and a ‘test’ set used to independently assess classification performance, following a standard 10-fold cross-validation procedure. This generated 10 pairs of SVM training and test sets, and overall classification accuracy was computed as the proportion of correct classification ‘guesses’ across all 10 folds of the cross-validation.

Since the training phase and the testing phase used different paradigms (during training, separate RefTCs were presented in different blocks, whereas in testing the three learnt exemplars of RefTCs were presented in the same block), the fMRI data from the training phase were not analysed, and we focused on our main question of where the learnt RefTCs were represented.
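
For concreteness, a minimal sketch of this kind of decoding analysis is given below in Python with scikit-learn. It assumes trial-wise voxel patterns have already been extracted for a region of interest (e.g. planum temporale); the synthetic data, variable names, and trial counts are illustrative assumptions, not the original analysis pipeline.

# Sketch of an MVPA decoding analysis: linear SVM with C = 1 and
# 10-fold cross-validation, as described above. Data are synthetic.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical input: 20 trials per learnt RefTC exemplar, 500 voxels each.
n_trials_per_class, n_voxels = 20, 500
X = rng.normal(size=(3 * n_trials_per_class, n_voxels))  # trial-by-voxel matrix
y = np.repeat([0, 1, 2], n_trials_per_class)              # one label per exemplar

# Linear SVM classifier with the fixed regularization hyperparameter C = 1.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))

# Standard 10-fold cross-validation: train on 9/10 of the trials and
# test on the held-out tenth, repeated over all 10 folds.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_acc = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")

# Overall accuracy: proportion of correct 'guesses' across all folds
# (with equal-sized folds this equals the mean fold accuracy).
print(f"Cross-validated accuracy: {fold_acc.mean():.3f}")

With three equiprobable exemplars, chance accuracy is 1/3, so decoding of the learnt patterns is inferred when cross-validated accuracy reliably exceeds that level.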

