Abstract

Orientation processing in the human brain plays a crucial role in guiding grasping actions toward an object. Remarkably, the human visual cortex can process orientation information even in the absence of visual input. In its place, non-visual information, including tactile and proprioceptive input from the hand and arm as well as feedback from action-related processes, may contribute to orientation processing. However, the precise mechanisms by which the visual cortices process orientation information from non-visual sensory input and action-related processes remain to be elucidated. We therefore examined orientation representation in the visual cortices by analyzing blood-oxygenation-level-dependent (BOLD) signals under four action conditions: direct grasp (DG), air grasp (AG), non-grasp (NG), and uninformed grasp (UG). Images of a cylindrical object were shown at +45° or −45°, corresponding to the orientations of the real object to be grasped with a whole-hand gesture, and participants judged the object's orientation under all conditions. Grasping was performed without online visual feedback of the hand or object, so that the visual areas could be probed under conditions involving tactile feedback, proprioception, and action-related processes. Multivariate pattern analysis was used to classify the two orientations from the cortical activity patterns and thereby compare orientation representation across the four action conditions. Overall, decoding accuracy was significantly above chance for DG; during AG, only the early visual areas showed significant accuracy, suggesting that tactile feedback from the object influences orientation processing in the higher visual areas. NG showed no significant decoding in any area, indicating that without a grasping action, visual input alone does not contribute to the cortical pattern representation. Interestingly, only the dorsal and ventral divisions of the third visual area (V3d and V3v) showed significant decoding during UG, despite the absence of visual instructions, suggesting that the orientation representation derived from action-related processes in V3d and from visualization of the object in V3v. Thus, orientation processing during non-visually guided grasping relies on non-visual sources and is divided according to the purpose of the process: action or recognition.
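
The abstract does not specify the decoding pipeline. As an illustration only, the following is a minimal sketch of the kind of two-class MVPA orientation decoding described here, assuming scikit-learn, a linear SVM with leave-one-run-out cross-validation, and synthetic voxel patterns standing in for real BOLD data; the dimensions, variable names, and cross-validation scheme are assumptions, not the authors' method.

```python
# Minimal MVPA orientation-decoding sketch (illustration only; not the
# authors' pipeline). Synthetic voxel patterns replace real BOLD data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8 scanner runs x 10 trials, 200 voxels per ROI.
n_runs, trials_per_run, n_voxels = 8, 10, 200
n_trials = n_runs * trials_per_run

# Trial-wise activity patterns (stand-ins for per-trial GLM beta estimates)
# with labels for the two object orientations: 0 = +45 deg, 1 = -45 deg.
y = np.tile([0, 1], n_trials // 2)
X = rng.standard_normal((n_trials, n_voxels))
X[y == 1, :20] += 0.5                 # weak orientation signal in 20 voxels
runs = np.repeat(np.arange(n_runs), trials_per_run)

# Linear SVM with leave-one-run-out cross-validation, a common MVPA choice:
# train on all runs but one, test on the held-out run, and average accuracy.
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())

print(f"Mean decoding accuracy: {scores.mean():.3f} (chance = 0.5)")
```

In a study of this kind, a per-participant accuracy like the one printed above would then be tested against the 50% chance level across participants to establish the significance reported for each visual area.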
