Abstract

The simple act of viewing and grasping an object involves complex sensorimotor control mechanisms that have been shown to vary as a function of multiple object and task features, such as object size, shape, weight, and wrist orientation. However, these features have mostly been studied in isolation. In contrast, given the nonlinearity of motor control, its computations require multiple features to be incorporated concurrently. The present study therefore tested the hypothesis that grasp computations integrate multiple task features superadditively, in particular when these features are relevant for the same action phase. We asked male and female human participants to reach to grasp objects of different shapes and sizes with different wrist orientations. In addition, we delayed movement onset using auditory signals that specified which effector to use. Using electroencephalography (EEG) and representational dissimilarity analysis to map the time course of cortical activity, we found that grasp computations formed superadditive, integrated representations of grasp features during different planning phases of grasping. Shape-by-size representations and size-by-orientation representations occurred before and after effector specification, respectively, and could not be explained by single-feature models. These observations are consistent with the brain performing distinct preparatory, phase-specific computations: visual object analysis to identify grasp points at abstract visual levels, and downstream sensorimotor preparatory computations for reach-to-grasp trajectories. Our results suggest that the brain adheres to the needs of nonlinear motor control for integration. Furthermore, they show that examining the superadditive influence of integrated representations can serve as a novel lens for mapping the computations underlying sensorimotor control.

Significance Statement

The nonlinearity of the sensorimotor control of grasping should require computations to incorporate multiple task features, such as object shape, size, and orientation, concurrently. However, grasp research so far has primarily investigated the influences of task features in isolation. In contrast, integrated representations of task features have been studied in cognitive paradigms, showing that multiple visual and action features are joined together in abstract representations, called event files, as indexed by working memory or priming effects. Using multivariate analysis of EEG, here we observe a new form of integrated representation of task features for grasping that cannot be explained by single-feature models or event files. Our approach offers novel insights into the preparatory processes of sensorimotor grasp control.
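To illustrate the logic of the superadditivity test, the following is a minimal sketch of how an integrated shape-by-size representation could be assessed with representational dissimilarity matrices (RDMs). The 2-shapes-by-3-sizes design, the model coding, and the regression approach are illustrative assumptions, not the paper's exact pipeline.

import numpy as np
from itertools import product

# Hypothetical condition set: 2 shapes x 3 sizes (labels are assumptions).
conditions = list(product(["cube", "sphere"], [1, 2, 3]))
n = len(conditions)

def model_rdm(dissim):
    """Build a model RDM from a pairwise dissimilarity function."""
    return np.array([[dissim(a, b) for b in conditions] for a in conditions],
                    dtype=float)

shape_rdm = model_rdm(lambda a, b: float(a[0] != b[0]))  # 1 if shape differs
size_rdm = model_rdm(lambda a, b: abs(a[1] - b[1]))      # graded size difference
interaction_rdm = shape_rdm * size_rdm                   # nonzero only when both differ

def upper(rdm):
    """Vectorize the upper triangle (one value per condition pair)."""
    return rdm[np.triu_indices(n, k=1)]

# In a real analysis, neural_rdm would hold EEG pattern dissimilarities at
# one time point; here a synthetic RDM is fabricated so the sketch runs.
rng = np.random.default_rng(0)
neural_rdm = shape_rdm + size_rdm + 0.5 * interaction_rdm
noise = rng.normal(0, 0.1, (n, n))
neural_rdm += (noise + noise.T) / 2  # keep the synthetic RDM symmetric

# Regress the neural RDM on the single-feature models plus the interaction
# term. The additive part of the signal is absorbed by the shape and size
# regressors, so a reliably positive interaction weight (across participants
# or time points) indicates superadditive, integrated coding that the
# single-feature models cannot explain.
X = np.column_stack([upper(shape_rdm), upper(size_rdm),
                     upper(interaction_rdm), np.ones(n * (n - 1) // 2)])
betas, *_ = np.linalg.lstsq(X, upper(neural_rdm), rcond=None)
print(dict(zip(["shape", "size", "shape_x_size", "intercept"],
               np.round(betas, 3))))

Note that graded (rather than binary) size dissimilarities are used here deliberately: with purely binary "feature differs" coding in a factorial design, the interaction regressor is perfectly collinear with the additive models plus an intercept, and its weight would not be identifiable.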
