Abstract

Recent experiments examining where participants look when grasping an object have found that fixations favor the eventual landing position of the index finger on the object. Although the act of picking up an object involves complex high-level computations, such as the visual analysis of object contours and surface properties and knowledge of an object's function and center of mass (COM) location, these investigations have generally used simple symmetrical objects, in which the COM and the horizontal midline overlap. Less research has examined how variations in object properties, such as differences in curvature and changes in COM location, affect visual and motor control. The purpose of this study was to examine grasp and fixation locations when grasping objects whose COM was positioned to the left or right of the object's horizontal midline (Experiment 1) and objects whose COM was moved progressively further from the midline through alterations of the object's shape (Experiment 2). Results from Experiment 1 showed that object COM position influenced fixation locations and grasp locations differently, with fixations not as tightly linked to index finger grasp locations as previously reported for symmetrical objects. Fixation positions were also more central on the non-symmetrical objects. This difference in gaze position may provide a more holistic view, allowing both index finger and thumb positions to be monitored while grasping. Finally, manipulations of COM distance (Experiment 2) exerted marked effects on the visual analysis of the objects compared to their influence on grasp locations, with fixation locations more sensitive to these manipulations. Together, these findings demonstrate how object features differentially influence gaze versus grasp positions during object interaction.

Highlights

  • We move about and interact with objects in our environment so effortlessly that the complexities of these interactions are rarely noticed

  • Fixation locations were significantly farther to the left (M = 0.13 cm to the right of center, SE = 0.23) when grasping objects with the COM shifted to the left (COML) than when grasping objects with the COM shifted to the right (COMR; M = 0.68 cm to the right of center, SE = 0.25; p < 0.001)

  • No significant differences between first and second fixation locations along the X-axis were apparent in any object category (COML, COMR, symmetrical objects)

Introduction

We move about and interact with objects in our environment so effortlessly that the complexities of these interactions are rarely noticed. Although the integration of various senses, such as visual and tactile feedback when locating and picking up objects and vestibular information for balance (for review, see Kandel et al., 2000), plays a key role in these interactions, we primarily rely on our sense of vision to accurately carry out our movements, with eye movements typically preceding hand movements in both pointing (Abrams et al., 1990; Bekkering et al., 1994; van Donkelaar et al., 2004) and object manipulation tasks (Land et al., 1999; Johansson et al., 2001; Land and Hayhoe, 2001; Hayhoe et al., 2003; Hayhoe and Ballard, 2005). Research has shown that eye movements are typically initiated toward the object 40–100 ms prior to movement onset (Prablanc et al., 1979; Biguer et al., 1982; Land et al., 1999), with fixations linked to where participants grasp an object (i.e., they look at the location where they place their index finger during a precision grasp; Brouwer et al., 2009; Desanghere and Marotta, 2011; Prime and Marotta, 2013) and, when manipulating objects, linked to forthcoming grasp sites, obstacles, and landing sites where objects are subsequently grasped, moved around, and placed, respectively (Johansson et al., 2001).
