Abstract

Multisensory interactions are ubiquitous in cortex and raise the question of whether sensory cortices can be distinctively supramodal, i.e., capable of functional selectivity irrespective of the sensory modality of their inputs (Pascual-Leone and Hamilton, 2001; Ricciardi and Pietrini, 2011; Voss and Zatorre, 2012). This suggests that visual perceptual learning could benefit from supramodal processing via a reverse hierarchy (Ahissar and Hochstein, 2004; Proulx et al., 2012). To test this, novel stimuli were developed, consisting of acoustic textures sharing the temporal statistics of visual random dot kinematograms (RDKs). Two groups of participants were trained on a difficult visual coherence discrimination task, with or without sounds, while being recorded with magnetoencephalography (MEG). Participants trained in the audiovisual condition (AV) significantly outperformed those trained visually (V), although they were unaware of their progress. When contrasting post- vs. pre-training MEG data, the two groups showed significant differences in both the dynamics and the cortical regions responsive to visual RDKs. Specifically, neural activity in multisensory cortices (mSTS) correlated with post-training performance, and the visual motion area (hMT+) responded selectively to the trained coherence levels, but only in the AV trainees. The latencies of these effects suggest selective feedback from mSTS to hMT+, possibly mediated by posterior temporal cortices (pSTS). Altogether, we interpret our results in the context of the Reverse Hierarchy Theory of learning (Ahissar and Hochstein, 2004), in which supramodal processing optimizes visual perceptual learning by capitalizing on sensory-invariant representations, namely global coherence levels shared across sensory modalities.
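
To make the stimulus construction concrete, the following minimal Python sketch (illustrative only, not the authors' code; the function names, parameters, and the choice of mean horizontal dot displacement as the shared temporal statistic are assumptions) shows one way to generate an RDK at a given coherence level and an acoustic noise texture whose amplitude envelope tracks the RDK's temporal statistics.

```python
# Hypothetical sketch: an RDK at a given motion coherence, plus an acoustic
# noise texture amplitude-modulated by a temporal statistic of the RDK.
import numpy as np

def rdk_frames(n_dots=100, n_frames=60, coherence=0.15, speed=0.02, rng=None):
    """Return dot positions per frame, shape (n_frames, n_dots, 2).
    A `coherence` fraction of dots moves rightward; the rest move in random
    directions. Positions wrap on the unit square."""
    rng = np.random.default_rng(rng)
    pos = rng.random((n_dots, 2))
    n_coh = int(round(coherence * n_dots))
    frames = []
    for _ in range(n_frames):
        angles = rng.uniform(0.0, 2 * np.pi, n_dots)
        angles[:n_coh] = 0.0  # coherent dots share a single direction
        step = speed * np.column_stack([np.cos(angles), np.sin(angles)])
        pos = (pos + step) % 1.0
        frames.append(pos.copy())
    return np.stack(frames)

def acoustic_texture(frames, sr=44100, frame_rate=60, rng=None):
    """White-noise carrier modulated by the mean horizontal displacement per
    frame transition, so the sound shares the RDK's temporal envelope."""
    rng = np.random.default_rng(rng)
    dx = np.diff(frames[..., 0], axis=0)
    dx = (dx + 0.5) % 1.0 - 0.5            # undo wraparound on the unit square
    envelope = np.abs(dx.mean(axis=1))     # one value per frame transition
    envelope /= envelope.max() + 1e-12
    env = np.repeat(envelope, sr // frame_rate)  # upsample to audio rate
    return env * rng.standard_normal(env.size)

frames = rdk_frames(coherence=0.15, rng=0)
sound = acoustic_texture(frames, rng=1)
```

Under this construction, higher visual coherence yields a stronger, more regular amplitude envelope, so the global coherence level is carried by both modalities, which is the sensory-invariant property the training is assumed to exploit.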
