Abstract

How humans visually select where to grasp objects is determined by the physical properties of the object (e.g., size, shape, weight), the degrees of freedom of the arm and hand, and the task to be performed. We recently demonstrated that human grasps are near-optimal with respect to a weighted combination of cost functions that make grasps uncomfortable, unstable, or impossible, e.g., due to unnatural grasp apertures or large torques. Here, we ask whether humans can consciously access these rules. We test whether humans can explicitly judge grasp quality derived from rules regarding grasp size, orientation, torque, and visibility. More specifically, we test whether grasp quality can be inferred (i) by using visual cues and motor imagery alone, (ii) from watching grasps executed by others, and (iii) through performing grasps, i.e., receiving visual, proprioceptive, and haptic feedback. Stimuli were novel objects made of 10 cubes of brass and wood (side length 2.5 cm) in various configurations. On each object, one near-optimal and one sub-optimal grasp were selected based on one cost function (e.g., torque), while the other constraints (grasp size, orientation, and visibility) were kept approximately constant or counterbalanced. Participants were visually cued to the locations of the selected grasps on each object and verbally reported which of the two grasps was better. Across three experiments, participants were required to either (i) passively view the static objects and imagine executing the two competing grasps, (ii) passively view videos of other participants grasping the objects, or (iii) actively grasp the objects themselves. Our results show that, for the majority of tested objects, participants could already judge grasp optimality from simply viewing the objects and imagining grasping them, but were significantly better in the video and grasping sessions. These findings suggest that humans can determine grasp quality even without performing the grasp, perhaps through motor imagery, and can further refine their understanding of how to correctly grasp an object not only through sensorimotor feedback but also by passively viewing others grasp objects.
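
As a rough illustration of the cost-combination idea, the sketch below combines several normalized cost terms into a single grasp score. The specific cost names, weights, and the linear form are illustrative assumptions for exposition, not the fitted model from our earlier work.

```python
import numpy as np

def grasp_cost(torque, aperture, orientation, visibility, weights=(1.0, 1.0, 1.0, 1.0)):
    """Toy weighted combination of normalized grasp costs (each assumed in [0, 1]).

    Lower is better: a near-optimal grasp minimizes the weighted sum.
    Cost definitions and weights are placeholders, not fitted values.
    """
    costs = np.array([torque, aperture, orientation, visibility], dtype=float)
    w = np.array(weights, dtype=float)
    return float(np.dot(w, costs) / w.sum())

# Compare two candidate grasps that differ mainly in torque, with the remaining
# costs held approximately constant (mirroring the stimulus design described above).
near_optimal = grasp_cost(torque=0.1, aperture=0.3, orientation=0.3, visibility=0.3)
sub_optimal = grasp_cost(torque=0.8, aperture=0.3, orientation=0.3, visibility=0.3)
print(near_optimal < sub_optimal)  # True: the low-torque grasp scores better
```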

Highlights

  • When we try to grasp objects within our field of view, we rarely fail

  • Participant judgements significantly improved in the grasping session compared to the vision session [t(20) = 5.14, p = 5 × 10⁻⁵; 95% highest density interval (95% HDI) = (8, 19)]

  • In Experiment 2a, we replicated the results from Experiment 1 on a subset of conditions (Figure 4B): participants were at chance in the vision session [t(24) = 1.88, p = 0.073; 95% HDI = (−1, 12); effect size = 0.38; 53% in the region of practical equivalence (ROPE)], above chance when physically executing the grasps [t(24) = 7.27, p = 1.7 × 10⁻⁷; 95% HDI = (18, 33)], and performance was significantly better in the grasping session than in the vision session [t(24) = 3.51, p = 0.0018; 95% HDI = (8, 32)]

Introduction

When we try to grasp objects within our field of view, we rarely fail. Humans can very effectively use their sense of sight to select where and how to grasp objects. For any given object, there are numerous ways to place our digits on the surface. Consider a simple sphere of 10 cm diameter and ∼300 cm² surface area. If we coarsely sample the surface in regions of 3 cm² (a generous estimate of the surface of a fingertip), there are approximately 100 surface locations on which to place our digits. Even when considering simple two-digit precision grips, which employ only the thumb and forefinger, there are ∼10,000 possible digit configurations that could be attempted. How do humans visually select which of these configurations is possible and will lead to a stable grasp?
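
This back-of-envelope estimate can be reproduced in a few lines; the 3 cm² fingertip patch and the variable names below are illustrative assumptions only.

```python
import math

# Rough count of candidate two-digit grasps on a 10 cm sphere.
diameter_cm = 10.0
radius_cm = diameter_cm / 2.0

surface_area_cm2 = 4.0 * math.pi * radius_cm ** 2            # ≈ 314 cm²
fingertip_patch_cm2 = 3.0                                     # generous fingertip contact area
n_contact_regions = surface_area_cm2 / fingertip_patch_cm2    # ≈ 105, i.e. roughly 100 locations

# A precision grip places thumb and forefinger on two of these regions.
n_two_digit_configs = round(n_contact_regions) ** 2           # ≈ 11,000, on the order of 10,000

print(f"surface area ≈ {surface_area_cm2:.0f} cm²")
print(f"contact regions ≈ {n_contact_regions:.0f}")
print(f"two-digit configurations ≈ {n_two_digit_configs}")
```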
