Abstract

Grasping movements are typically performed toward visually sensed objects. However, planning and execution of grasping movements can also be supported by haptic information when we grasp objects held in the other hand. In the present study, we investigated this sensorimotor integration process by comparing grasping movements toward objects sensed through visual, haptic, or visuo-haptic signals. When movements were based on haptic information only, hand preshaping was initiated earlier, the digits closed on the object more slowly, and the final phase was more cautious than in movements based on visual information only. Importantly, the simultaneous availability of vision and haptics led to faster movements and to an overall decrease in grip aperture. Our findings also show that each modality contributes to a different extent in different phases of the movement, with haptics being more crucial in the initial phases and vision being more important for the final online control. Thus, vision and haptics can be flexibly combined to optimize the execution of grasping movements.

Highlights

  • In everyday life, actions are directed toward objects we see, and toward objects we already hold in one hand

  • When haptic information about the target position is complemented by visual information, the endpoints of reaching movements are more accurate and more precise than in conditions based on a single sensory modality, supporting the idea that multisensory information can be efficiently combined during sensorimotor processing [8,10,17,18,19,20]

  • The VH-V and VH-H comparisons showed that participants had a shorter movement duration (MD) in VH compared to V (Fig. 3a) and a smaller maximum grip aperture (MGA) in VH compared to each unisensory condition (Fig. 3b)

Introduction

Actions are directed toward objects we see, and toward objects we already hold in one hand. In contrast to reaching, grasping actions toward a haptically sensed object need to be based not only on extrinsic properties of the object (i.e., its location) but also on its intrinsic properties (i.e., its size) [21,22,23,24,25,26]. These properties can be acquired through haptics. Alternatively, haptic information may be completely ignored, with movements planned and executed by relying on visual information alone.

