Abstract

To control our actions efficiently, our brain represents our body based on a combination of visual and proprioceptive cues, weighted according to how reliable, i.e., how precise, each modality is in a given context. However, perceptual experiments in other modalities suggest that the weights assigned to sensory cues are also modulated “top-down” by attention. Here, we asked whether, during action, attention can likewise modulate the weights (i.e., the precision) assigned to visual versus proprioceptive information about body position. Participants controlled a virtual hand (VH) via a data glove, matching either the VH or their unseen real hand (RH) movements to a target, thus adopting a “visual” or a “proprioceptive” attentional set, under varying levels of visuo-proprioceptive congruence and visibility. Functional magnetic resonance imaging (fMRI) revealed increased activation of the multisensory superior parietal lobe (SPL) during the VH task and increased activation of the secondary somatosensory cortex (S2) during the RH task. Dynamic causal modeling (DCM) showed that these activity changes resulted from selective, diametrical gain modulations in the primary visual cortex (V1) and S2. These results suggest that endogenous attention can balance the gain of visual versus proprioceptive brain areas, thus contextualizing their influence on multisensory areas representing the body for action.
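For context, the weighting described here is standardly formalized as maximum-likelihood cue combination, in which each cue is weighted by its precision (inverse variance). The following is the textbook formulation (our illustration, not an equation taken from this paper):

\[
\hat{x} = w_V x_V + w_P x_P, \qquad w_V = \frac{\pi_V}{\pi_V + \pi_P}, \quad w_P = \frac{\pi_P}{\pi_V + \pi_P}, \quad \pi_i = \frac{1}{\sigma_i^2},
\]

where $x_V$ and $x_P$ are the visual and proprioceptive estimates of hand position and $\sigma_V^2$ and $\sigma_P^2$ their variances. Under this scheme, a “top-down” gain increase in one modality raises its precision $\pi_i$ and thereby its weight on the combined estimate $\hat{x}$.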

Highlights

  • To control our actions efficiently, our brain constructs a multisensory body representation based mainly on a combination of visual and proprioceptive cues (Ghahramani et al., 1997; Graziano and Botvinick, 2002; Holmes and Spence, 2004; Makin et al., 2008; Blanke et al., 2015)

  • We focused our analysis on a left-lateralized network comprising visual, somatosensory, and multisensory areas identified by our statistical parametric mapping (SPM) results: the left primary visual cortex (V1), the left V5, the left secondary somatosensory cortex (S2), and the left superior parietal lobe (SPL)

  • We used a virtual reality environment to investigate whether endogenous attention can change the weighting of visual versus proprioceptive hand movement cues during action (a numerical sketch of this weighting idea follows below)
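As a concrete illustration of the weighting hypothesis in the last highlight, here is a minimal numerical sketch in Python (our toy example; the cue values, precisions, and the attentional gain factor are hypothetical, not parameters from the study). Attention is modeled as a multiplicative gain on the precision of the attended modality, which shifts the combined position estimate toward that modality:

    # Toy sketch (not the paper's model): attention as a multiplicative gain
    # on the precision of the attended modality in precision-weighted fusion.

    def combine(x_vis, x_prop, prec_vis, prec_prop):
        """Precision-weighted average of visual and proprioceptive cues."""
        w_vis = prec_vis / (prec_vis + prec_prop)
        return w_vis * x_vis + (1.0 - w_vis) * x_prop

    x_vis, x_prop = 10.0, 14.0      # discrepant position estimates (arbitrary units)
    prec_vis, prec_prop = 4.0, 1.0  # baseline precisions (1 / variance)
    gain = 3.0                      # hypothetical attentional gain on proprioception

    print(combine(x_vis, x_prop, prec_vis, prec_prop))         # 10.8: vision dominates
    print(combine(x_vis, x_prop, prec_vis, prec_prop * gain))  # ~11.71: shifted toward proprioception

In these terms, the gain modulations of V1 and S2 reported in the Abstract would correspond to scaling the precision, and hence the weight, of the visual and proprioceptive cues, respectively.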


Introduction

To control our actions efficiently, our brain constructs a multisensory body representation based mainly on a combination of visual and proprioceptive cues (Ghahramani et al., 1997; Graziano and Botvinick, 2002; Holmes and Spence, 2004; Makin et al., 2008; Blanke et al., 2015). We can sometimes choose which of our senses to focus on in a particular context, i.e., where to allocate our processing resources. This has been studied as “crossmodal” or “intersensory” attention (cf. Driver and Spence, 2000; Rowe et al., 2002; Macaluso and Driver, 2005; Talsma et al., 2010; Tang et al., 2016). Studies using perceptual paradigms have shown modulations of brain responses at early sensory levels when participants were
